Dataset columns: idx (int64, 1 to 56k); question (string, 15 to 155 chars); answer (string, 2 to 29.2k chars, nullable); question_cut (string, 15 to 100 chars); answer_cut (string, 2 to 200 chars, nullable); conversation (string, 47 to 29.3k chars); conversation_cut (string, 47 to 301 chars).
9,201
Statistical podcasts

You may be interested in the following link: http://www.ats.ucla.edu/stat/seminars/ where the UCLA Statistical Computing unit has very nice screencasts available. I have found them very useful in the past. They function essentially as lectures. Top-quality teaching.

9,202
Statistical podcasts

Another good podcast is In Our Time by the BBC. It's a weekly podcast (off air for the summer) that deals with topics in history, religion and science. I would say that about 1 in 12 episodes deal with mathematics and statistics. Take a look at the podcast archive for science subjects.

9,203
Statistical podcasts

Check out my podcast, www.learningmachines101.com, which covers topics in statistical machine learning.

9,204
Statistical podcasts

I also just realized that Freakonomics has a podcast.

9,205
Statistical podcasts

Keith Bower has a number of statistics-related podcasts. They're pretty good and help get the concepts down. You can get them on iTunes or his website: keithbower.com.

9,206
Statistical podcasts

I haven't listened to the most recent episodes, but I find Talking Machines (http://www.thetalkingmachines.com/) to be really good. It's done by Prof. Ryan Adams and reporter Katherine Gorman.

9,207
Statistical podcasts

Not So Standard Deviations (https://soundcloud.com/nssd-podcast)

9,208
Statistical podcasts

Simply Statistics is a blog about statistics that also has several podcasts: http://simplystatistics.org/category/podcast/
From their About page:
We are three biostatistics professors (Jeff Leek, Roger Peng, and Rafa Irizarry) who are fired up about the new era where data are abundant and statisticians are scientists.
Why “Simply Statistics”: We needed a title. Plus, we like the idea of using simple statistics to solve real, important problems. We aren’t fans of unnecessary complication -- that just leads to lies, damn lies and something else.

9,209
Statistical podcasts

A podcast about using R for statistics: http://www.r-podcast.org/

9,210
Are there 99 percentiles, or 100 percentiles? And are they groups of numbers, or dividers or pointers to individual numbers?

Both of these senses of percentile, quartile, and so on are in widespread use. It’s easiest to illustrate the difference with quartiles:
the “divider” sense — there are 3 quartiles, which are the values dividing the distribution (or sample) into 4 equal parts:
   1   2   3
---|---|---|---
(Sometimes this is used with max and min values included, so there are 5 quartiles numbered 0–4; note this doesn’t conflict with the numbering above, it just extends it.)
the “bin” sense: there are 4 quartiles, the subsets into which those 3 values divide the distribution (or sample)
 1   2   3   4
---|---|---|---
Neither usage can reasonably be called “wrong”: both are used by many experienced practitioners, and both appear in plenty of authoritative sources (textbooks, technical dictionaries, and the like).
With quartiles, the sense being used is usually clear from context: speaking of a value in the third quartile can only be the “bin” sense, while speaking of all values below the third quartile most likely means the “divider” sense. With percentiles, the distinction is more often unclear, but it is also less significant for most purposes, since 1% of a distribution is so small that a narrow strip is approximately a line. Speaking of everyone above the 80th percentile might mean the top 20% or the top 19%; in an informal context that is not a major difference, and in rigorous work the meaning needed should presumably be clarified by the rest of the context.
(Parts of this answer are adapted from https://math.stackexchange.com/questions/1419609/are-there-3-or-4-quartiles-99-or-100-percentiles, which also gives quotations + references.)
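To make the two senses concrete, here is a small numpy sketch on a made-up sample of the integers 1 to 100 (all names here are illustrative):

```python
import numpy as np

data = np.arange(1, 101)  # made-up sample: the integers 1..100

# "Divider" sense: the 3 quartiles are the cut points at 25%, 50%, 75%.
dividers = np.percentile(data, [25, 50, 75])
print(dividers)  # the cut points 25.75, 50.5, 75.25

# "Bin" sense: each observation falls into one of 4 quartile groups.
groups = np.searchsorted(dividers, data, side="left") + 1  # labels 1..4
print(np.bincount(groups)[1:])  # 25 observations in each group
```

The same three numbers serve both senses: as the dividers themselves, and as the boundaries of the four groups.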
9,211
Are there 99 percentiles, or 100 percentiles? And are they groups of numbers, or dividers or pointers to individual numbers?

Take this answer with a grain of salt -- it started out fairly wrong and I am still deciding what to do with it.
The question is partly about language and usage, whereas this answer focuses on mathematics. I hope that the mathematics will provide a framework for understanding different usages.
One nice way to treat this is to start with simple math and work backwards to the more complicated case of real data. Let's start with PDFs, CDFs, and inverse CDFs (also known as quantile functions). The $x$th quantile of a distribution with pdf $f$ and cdf $F$ is $F^{-1}(x)$, so the $z$th percentile is $F^{-1}(z/100)$. This provides a way to pin down the ambiguity you identify: we can look at situations where $F$ is 1) not invertible, 2) only invertible on a certain domain, or 3) invertible but with an inverse that never attains certain values.
Example of 1): I'll leave this for last; keep reading.
Example of 2): For a uniform 0,1 distribution, the CDF is invertible when restricted to [0, 1], so the 100th and 0th percentiles could be defined as $F^{-1}(1)$ and $F^{-1}(0)$ given that caveat. Otherwise, they are ill-defined since $F(-0.5)$ (for example) is also 0.
Another example of 2): For a uniform distribution on the two disjoint intervals from 0 to 1 and 2 to 3, the CDF rises from 0 to 0.5 on [0, 1], stays flat at 0.5 on [1, 2], and rises from 0.5 to 1 on [2, 3].
Most quantiles of this distribution exist and are unique, but the median (50th percentile) is inherently ambiguous: any value in [1, 2] splits the distribution in half. In R, quantile goes half-way: quantile(c(runif(100), runif(100) + 2), 0.5) returns about 1.5.
Example of 3): For a normal distribution, the 100th and 0th percentiles do not exist (or they "are" $\pm \infty$). This is because the normal CDF never attains 0 or 1.
Discussion of 1): For "nice" CDFs, such as with non-extreme quantiles or continuous distributions, the percentiles exist and are unique. But for a discrete distribution such as the Poisson distribution, my definition is ambiguous, because for most $z/100$ there is no $y$ with $F(y) = z/100$. For a Poisson distribution with expectation 1, the CDF is a step function that jumps at each non-negative integer.
For the 60th percentile, R returns 1 (quantile(rpois(n = 1000, lambda = 1), 0.60)); for the 65th percentile, R also returns 1. You can think of this as drawing 100 observations, ranking them from low to high, and returning the 60th or 65th item; if you do this, you will most often get 1.
When it comes to real data, all distributions are discrete. (The empirical CDF of runif(100) or np.random.random(100) has 100 increments clustered around 0.5.) But rather than treating them as discrete, R's quantile function seems to treat them as samples from continuous distributions. For example, the median (the 50th percentile, or 0.5 quantile) of the sample 3, 4, 5, 6, 7, 8 is given as 5.5. If you draw 2n samples from a unif(3, 8) distribution and take any number between the nth and (n+1)th sample, you will converge on 5.5 as n increases.
It's interesting to also consider the discrete uniform distribution with equal probability of hitting 3,4,5,6,7,8. (A die roll plus two.) If you take the sample-and-rank approach outlined above for the Poisson distribution, you will usually get 5 or 6. As the samples get bigger, the distribution for the number halfway up will converge on half fives and half sixes. 5.5 seems like a reasonable compromise here too.
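The same behaviour can be reproduced in Python: numpy's default quantile method uses linear interpolation, following the same convention as R's default (type 7). The Poisson sample below is made up for illustration.

```python
import numpy as np

# Median of the six-point sample: interpolated between the 3rd and
# 4th order statistics, giving 5.5 rather than any observed value.
med = np.quantile([3, 4, 5, 6, 7, 8], 0.5)
print(med)  # 5.5

# Quantiles of a discrete sample: nearby probability levels map to
# the same value, because the empirical CDF jumps in steps.
rng = np.random.default_rng(0)
draws = rng.poisson(lam=1, size=1000)
q60, q65 = np.quantile(draws, 0.60), np.quantile(draws, 0.65)
print(q60, q65)  # typically both 1, matching the R example above
```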
9,212
Are there 99 percentiles, or 100 percentiles? And are they groups of numbers, or dividers or pointers to individual numbers?

There are other ways to calculate percentiles; what follows is not the only one. It is taken from this source.
The meaning of percentile can be captured by stating that the $p$th percentile of a distribution is a number such that approximately $p$ percent ($p\%$) of the values in the distribution are equal to or less than that number. So, if $28$ is the $80$th percentile of a larger batch of numbers, $80$% of those numbers are less than or equal to $28$.
To calculate percentiles, sort the data so that $x_1$ is the smallest value, and $x_n$ is the largest,
with $n$ = total number of observations, $x_i$ is the $p_i$th percentile of the data set where:
$p_i = \dfrac{100(i - 0.5)}{n}$
Example from the same notes for illustration:
To take a single example, $7$ is the $50$th percentile of the distribution, and about half of the values in the distribution are equal to or less than $7$.
"If you had 200 numbers, there'd be 100 percentiles, but would each refer to a group of two numbers?"
No.
Assume the numbers are sorted in ascending order from $x_1$ to $x_{200}$. In this case the percentiles are:
$\dfrac{100(1-0.5)}{200}$, $\dfrac{100(2-0.5)}{200}$, $\dfrac{100(3-0.5)}{200}$, $...$
resulting in
$0.25, 0.75, 1.25 ... $ percentiles corresponding to indices $1, 2, 3, ...$
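A quick sanity check of the $p_i = 100(i - 0.5)/n$ formula for $n = 200$ (plain Python; no data values are needed, since the percentile level depends only on the rank $i$):

```python
# Percentile level attached to each rank i = 1..n under
# p_i = 100 * (i - 0.5) / n, for n = 200 sorted observations.
n = 200
p = [100 * (i - 0.5) / n for i in range(1, n + 1)]

print(p[:3])  # [0.25, 0.75, 1.25]
print(p[-1])  # 99.75 -- no observation sits at the 0th or 100th percentile
```

Each of the 200 observations gets its own percentile level, rather than pairs of observations sharing one of 100 levels.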
9,213
Are there 99 percentiles, or 100 percentiles? And are they groups of numbers, or dividers or pointers to individual numbers?

I was taught that an observation in the $n$th percentile is greater than $n\%$ of the observations in the dataset under consideration. To me, this implies that there is no 0th or 100th percentile: no observation can be greater than 100% of the observations, because it forms part of that 100% (and similar logic applies in the case of 0).
Edit: For what it's worth, this is also consistent with non-academic usage of the term that I've encountered: "X is in the nth percentile" implies that the percentile is the group, not a boundary.
I unfortunately have no source for this that I can point you to.
9,214
Are there 99 percentiles, or 100 percentiles? And are they groups of numbers, or dividers or pointers to individual numbers?

Note: I will accept somebody else's answer rather than mine, but I do see some useful comments, so I'm writing an answer that collects them.
Based on Nick's answer "-iles" terminology for the top half a percent, it seems that the terms are ambiguous. Better terminology (based on my understanding of that post) would be "X% point" and "X%-Y% group": a quantile point (for quartiles, numbered anywhere from 0 to 4) and a quantile group running from the X quantile point to the Y quantile point.
Either way one would get 101 points for percentiles (counting only integer percentile points, minimum and maximum included). But if one speaks of the 1st, 2nd, 3rd percentile or quantile, that is counting, and a count can't start at 0; nor can you have, e.g., more than 4 quartiles or more than 100 percentiles. So the "1st, 2nd, 3rd" terminology can't really refer to point 0. If somebody says "0th point", it's clear they mean point 0, but they should really say "quantile point 0", or "the quantile group at point 0". Even computer scientists count the first item as 1; if they call it item 0, that's indexing from 0, not counting.
A comment mentions: "There can't be 100. Either 99 or 101, depending on whether you count maximum and minimum." I think there's a case for 99 or 101 when talking about quantile points rather than groups, though I still wouldn't say "0th". For n items, an index may run from 0 to n-1, and one wouldn't attach ordinal suffixes (1st, 2nd, etc.) to an index, unless the index happens to label the first item 1. An index that gives the first item index 0 is not a 1st, 2nd, 3rd count: the item with index 0 is the 1st item, and one wouldn't call it the 0th and label the second item the 1st.
9,215
Do neural networks learn a function or a probability density function?

Strictly speaking, neural networks are fitting a non-linear function.
They can be interpreted as fitting a probability distribution if suitable activation functions are chosen and certain conditions are respected (outputs must be non-negative and normalize to 1, etc.). But that is a question of how you choose to interpret their output, not of what they are actually doing. Under the hood, they are still non-linear function estimators, which you are choosing to apply to the specific problem of PDF estimation.
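To make "interpreting the output" concrete, here is a small numpy sketch with a hypothetical 3-class output (the logits are made up). The softmax layer is itself just a non-linear function; it is our decision to read its output as class probabilities that adds the statistical meaning.

```python
import numpy as np

def softmax(z):
    # Subtracting the max is a standard numerical-stability trick;
    # it does not change the result.
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])  # made-up raw network outputs
p = softmax(logits)
print(p, p.sum())  # three non-negative values summing to 1
```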
9,216
Do neural networks learn a function or a probability density function?

Generally Neural Networks are not used to model complete probability densities. Their focus is to just model the mean of a distribution (or in a deterministic situation simply a non-linear function). Nevertheless it is very possible to model complete probability densities via Neural Networks.
One easy way to do this is for example for a Gaussian case is to emit the mean from one output and variance from another output of the network and then minimize $-log N(y | x ;\mu,\sigma)$ function as part of the training process instead of the common squared error. This the maximum likelihood procedure for a Neural Network.
Once you train this network everytime you plug an $x$ value as an input it will give you the $\mu$ and the $\sigma$, then you can plug the entire triplet $y,\mu,\sigma$ to the density $f(y|x)\sim N(\mu,\sigma)$ to obtain the density value for any $y$ you like. At this stage you can chose which $y$ value to use based on a real domain loss function. One thing to keep in mind is that for $\mu$ the output activation should be unrestricted so that you can emit $-\inf$ to $+\inf$ while $\sigma$ should be a positive only activation.
In general, unless it is a deterministic function that we are after, the standard squared loss training used in neural networks is pretty much the same procedure I described above. Under the hood a $Gaussian$ distribution is assumed implicitly without caring about the $\sigma$ and if you examine carefully $-log N(y|x;\mu,\sigma)$ gives you an expression for squared loss (The loss function of the Gaussian maximum likelihood estimator). In this scenario, however, instead of a $y$ value to your liking you are stuck with emitting $\mu$ everytime when given a new $x$ value.
For classification the output will be a Bernoulli distribution instead of a Gaussian, and it has a single parameter to emit. As specified in the other answer, this parameter is between $0$ and $1$, so the output activation should be chosen accordingly; it can be a logistic function or something else that achieves the same purpose.
A more sophisticated approach is Bishop's Mixture Density Networks. You can read about it in the frequently referenced paper here:
https://publications.aston.ac.uk/373/1/NCRG_94_004.pdf
|
9,217
|
Do neural networks learn a function or a probability density function?
|
My dissenting answer is that in most impressive practical applications (those where they get the most coverage in the media, for instance) it's neither the function nor the probabilities. They implement stochastic decision making.
On the surface it looks like NNs are just fitting the function (cue the universal approximation reference). In some cases, when certain activation functions and particular assumptions such as Gaussian errors are used, or when you read papers on Bayesian networks, it appears that NNs can produce probability distributions.
However, this is all just by the way. What NNs are intended to do is to model decision making. When a car is driven by AI, its NN is not trying to calculate the probability that it has an object in front of it and then, given that there is an object, the probability that it's a human. Nor is it calculating the mapping of sensor inputs to various kinds of objects. No, the NN is supposed to make a decision based on all the input: steer sideways or keep driving. It's not calculating a probability; it's telling the car what to do.
|
9,218
|
No regularisation term for bias unit in neural network
|
Overfitting usually requires the output of the model to be sensitive to small changes in the input data (i.e. to exactly interpolate the target values, you tend to need a lot of curvature in the fitted function). The bias parameters don't contribute to the curvature of the model, so there is usually little point in regularising them as well.
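As a minimal illustration (plain NumPy, with made-up data and a deliberately heavy penalty strength), here is gradient descent with L2 decay applied to the slope weights only; the unregularised bias is still free to land on the right intercept even though the weights are shrunk:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([2.0, -1.0]) + 5.0 + rng.normal(0.0, 0.1, size=50)

w, b = np.zeros(2), 0.0
lam, lr = 1.0, 0.1                                 # heavy weight decay
for _ in range(500):
    resid = X @ w + b - y
    w -= lr * (X.T @ resid / len(y) + lam * w)     # penalty term on weights only
    b -= lr * resid.mean()                         # no penalty term for the bias
print(abs(b - 5.0) < 1.0)                          # True: intercept still recovered
```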
|
9,219
|
No regularisation term for bias unit in neural network
|
The motivation behind L2 (or L1) regularisation is that by restricting the weights you constrain the network and are less likely to overfit. It makes little sense to restrict the biases, since their inputs are fixed (e.g. always 1); they work like neuron intercepts, which it makes sense to give greater flexibility.
|
9,220
|
No regularisation term for bias unit in neural network
|
Weights determine the slopes of the activation functions. Regularization reduces the weights and hence the slopes of the activation functions, which reduces the model variance and the overfitting effect. The biases have no influence on the slopes of the activation functions; however, they do influence the position of the activation functions in space. Their optimal values depend on the weights, so they should be adjusted to the regularized weights. The biases themselves should be adjusted without regularization; regularizing them can be harmful. I considered the functions of weights and biases in randomized NNs; see here
|
9,221
|
No regularisation term for bias unit in neural network
|
I would add that the bias term is often initialized with a mean of 1 rather than 0, so we might want to regularize it in a way that keeps it from straying too far from a constant value like 1, e.g. using 1/2*(bias-1)^2 rather than 1/2*(bias)^2.
Replacing the -1 part with a subtraction of the mean of the biases (a per-layer mean, or an overall one) might also help; this mean-subtraction idea is just a hypothesis of mine.
This all depends on the activation function too. E.g.: sigmoids might suffer from vanishing gradients here if biases are regularized toward high constant offsets.
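For what it is worth, the modified penalty proposed here is a one-liner; a sketch (the target of 1 follows this answer's hypothesis, not standard practice):

```python
import numpy as np

def bias_penalty(b, target=1.0, lam=0.01):
    # lam/2 * (b - target)^2 summed over the biases, instead of lam/2 * b^2
    return 0.5 * lam * np.sum((b - target) ** 2)

def bias_penalty_grad(b, target=1.0, lam=0.01):
    # the gradient pulls each bias toward `target` rather than toward 0
    return lam * (b - target)

b = np.array([0.5, 1.5])
print(bias_penalty(b))        # 0.0025: distance from 1, not from 0
print(bias_penalty_grad(b))   # [-0.005  0.005]
```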
|
9,222
|
No regularisation term for bias unit in neural network
|
The tutorial says "applying weight decay to the bias units usually makes only a small difference to the final network", so if it does not help, then you can stop doing it to eliminate one hyperparameter. If you think regularizing the offset would help in your setup, then cross-validate it; there's no harm in trying.
|
9,223
|
Is it essential to do normalization for SVM and Random Forest?
|
The answer to your question depends on what similarity/distance function you plan to use (in SVMs). If it's the simple (unweighted) Euclidean distance, then by not normalizing your data you are unwittingly giving some features more importance than others.
For example, if your first dimension ranges from 0-10 and your second dimension from 0-1, a difference of 1 in the first dimension (just a tenth of its range) contributes as much to the distance computation as two wildly different values in the second dimension (0 and 1). In effect you are exaggerating small differences in the first dimension. You could of course come up with a custom distance function or weight your dimensions by an expert's estimate, but this leads to a lot of tunable parameters depending on the dimensionality of your data. In this case, normalization is an easier path (although not necessarily ideal) because you can at least get started.
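The 0-10 versus 0-1 example is easy to see numerically (a sketch; the numbers are made up):

```python
import numpy as np

# feature 1 ranges over [0, 10], feature 2 over [0, 1]
p = np.array([3.0, 0.0])
q = np.array([4.0, 1.0])

# raw gaps: a tenth-of-range gap in feature 1 counts as much as a
# full-range gap in feature 2 when computing Euclidean distance
print(np.abs(p - q))              # [1. 1.]

# after dividing by each feature's range, feature 2's gap dominates
ranges = np.array([10.0, 1.0])
print(np.abs(p - q) / ranges)     # [0.1 1. ]
```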
Finally, still for SVMs, another thing you can do is come up with a similarity function rather than a distance function and plug it in as a kernel (technically this function must generate positive semi-definite matrices). This function can be constructed any way you like and can take into account the disparity in the ranges of the features.
For random forests on the other hand, since one feature is never compared in magnitude to other features, the ranges don't matter. It's only the range of one feature that is split at each stage.
|
9,224
|
Is it essential to do normalization for SVM and Random Forest?
|
Random Forest is invariant to monotonic transformations of individual features: translations or per-feature scalings will not change anything for the Random Forest. SVM will probably do better if your features have roughly the same magnitude, unless you know a priori that some feature is much more important than others, in which case it's okay for it to have a larger magnitude.
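The invariance claim is easy to check with a one-feature decision stump, the building block of a tree split (a sketch; `stump_accuracy` is my own helper, not a library function):

```python
import numpy as np

def stump_accuracy(x, y):
    # best single-threshold split on a 1-d feature, scored by accuracy
    xs = np.sort(x)
    best = 0.0
    for t in (xs[1:] + xs[:-1]) / 2:          # candidate midpoints
        pred = (x > t).astype(int)
        best = max(best, (pred == y).mean(), ((1 - pred) == y).mean())
    return best

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 10.0, size=30)
y = (x > 4.0).astype(int)

# a monotone transform reorders nothing, so the best achievable split is unchanged
print(stump_accuracy(x, y) == stump_accuracy(np.log(x), y))  # True
```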
|
9,225
|
What is the relationship between regression and linear discriminant analysis (LDA)?
|
I take it that the question is about LDA and linear (not logistic) regression.
There is a considerable and meaningful relation between linear regression and linear discriminant analysis. In case the dependent variable (DV) consists of just 2 groups, the two analyses are actually identical. Although the computations are different and the results (the regression and discriminant coefficients) are not the same, they are exactly proportional to each other.
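The two-group proportionality is easy to verify numerically; a NumPy sketch with synthetic data (variable names are mine): the OLS slope vector from regressing the 0/1 group indicator on X is exactly proportional to the LDA direction $S_w^{-1}(\mu_1 - \mu_0)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1, d = 40, 60, 3
X = np.vstack([rng.normal(0.0, 1.0, (n0, d)),      # group 0
               rng.normal(1.0, 1.0, (n1, d))])     # group 1
y = np.r_[np.zeros(n0), np.ones(n1)]

# LDA direction: Sw^{-1} (mu1 - mu0)
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
Xw = np.vstack([X[y == 0] - mu0, X[y == 1] - mu1])  # within-group centered data
w_lda = np.linalg.solve(Xw.T @ Xw, mu1 - mu0)

# OLS regression of the 0/1 indicator on X (intercept included, then dropped)
A = np.column_stack([np.ones(n0 + n1), X])
beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]

cos = beta @ w_lda / (np.linalg.norm(beta) * np.linalg.norm(w_lda))
print(np.isclose(cos, 1.0))  # True: the coefficient vectors are proportional
```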
Now for the more-than-two-groups situation. First, let us state that LDA (its extraction stage, not its classification stage) is equivalent (in the sense of linearly related results) to canonical correlation analysis if you turn the grouping DV into a set of dummy variables (with one redundant dummy dropped) and do canonical analysis on the sets "IVs" and "dummies". The canonical variates on the "IVs" side that you obtain are what LDA calls "discriminant functions" or "discriminants".
So, how is canonical analysis related to linear regression? Canonical analysis is in essence a MANOVA (in the sense of "multivariate multiple linear regression" or the "multivariate general linear model") deepened into the latent structure of relationships between the DVs and the IVs. The two sets' variation is decomposed, in their inter-relations, into latent "canonical variates". Let us take the simplest example, Y vs X1 X2 X3. Maximization of the correlation between the two sides is linear regression (if you predict Y by the Xs) or, which is the same thing, MANOVA (if you predict the Xs by Y). The correlation is unidimensional (with magnitude $R^2$ = Pillai's trace) because the lesser set, Y, consists of just one variable. Now let's take these two sets: Y1 Y2 vs X1 X2 X3. The correlation being maximized here is 2-dimensional because the lesser set contains 2 variables. The first and stronger latent dimension of the correlation is called the 1st canonical correlation, and the remaining part, orthogonal to it, the 2nd canonical correlation. So, MANOVA (or linear regression) just asks what the partial roles (the coefficients) of the variables are in the whole 2-dimensional correlation of sets, while canonical analysis goes below that to ask what the partial roles of the variables are in the 1st correlational dimension and in the 2nd.
Thus, canonical correlation analysis is multivariate linear regression deepened into latent structure of relationship between the DVs and IVs. Discriminant analysis is a particular case of canonical correlation analysis (see exactly how). So, here was the answer about the relation of LDA to linear regression in a general case of more-than-two-groups.
Note that my answer does not at all see LDA as classification technique. I was discussing LDA only as extraction-of-latents technique. Classification is the second and stand-alone stage of LDA (I described it here). @Michael Chernick was focusing on it in his answers.
|
9,226
|
What is the relationship between regression and linear discriminant analysis (LDA)?
|
Here is a reference to one of Efron's papers: The Efficiency of Logistic Regression Compared to Normal Discriminant Analysis, 1975.
Another relevant paper is Ng & Jordan, 2001, On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes. And here is the abstract of a comment on it by Xue & Titterington, 2008, that mentions O'Neill's papers related to his PhD dissertation:
Comparison of generative and discriminative classifiers is an
ever-lasting topic. As an important contribution to this topic, based
on their theoretical and empirical comparisons between the naïve Bayes
classifier and linear logistic regression, Ng and Jordan (NIPS
841---848, 2001) claimed that there exist two distinct regimes of
performance between the generative and discriminative classifiers with
regard to the training-set size. In this paper, our empirical and
simulation studies, as a complement of their work, however, suggest
that the existence of the two distinct regimes may not be so reliable.
In addition, for real world datasets, so far there is no theoretically
correct, general criterion for choosing between the discriminative and
the generative approaches to classification of an observation $x$ into
a class $y$; the choice depends on the relative confidence we have in
the correctness of the specification of either $p(y|x)$ or $p(x, y)$
for the data. This can be to some extent a demonstration of why Efron
(J Am Stat Assoc 70(352):892---898, 1975) and O'Neill (J Am Stat Assoc
75(369):154---160, 1980) prefer normal-based linear discriminant
analysis (LDA) when no model mis-specification occurs but other
empirical studies may prefer linear logistic regression instead.
Furthermore, we suggest that pairing of either LDA assuming a common
diagonal covariance matrix (LDA) or the naïve Bayes classifier and
linear logistic regression may not be perfect, and hence it may not be
reliable for any claim that was derived from the comparison between
LDA or the naïve Bayes classifier and linear logistic regression to be
generalised to all generative and discriminative classifiers.
There are a lot of other references on this that you can find online.
|
9,227
|
What is the relationship between regression and linear discriminant analysis (LDA)?
|
The purpose of this answer is to explain the exact mathematical relationship between linear discriminant analysis (LDA) and multivariate linear regression (MLR). It will turn out that the correct framework is provided by reduced rank regression (RRR).
We will show that LDA is equivalent to RRR of the whitened class indicator matrix on the data matrix.
Notation
Let $\newcommand{\X}{\mathbf X}\X$ be the $n\times d$ matrix with data points $\newcommand{\x}{\mathbf x}\x_i$ in rows and variables in columns. Each point belongs to one of the $k$ classes, or groups. Point $\x_i$ belongs to class number $g(i)$.
Let $\newcommand{\G}{\mathbf G}\G$ be the $n \times k$ indicator matrix encoding group membership as follows: $G_{ij}=1$ if $\x_i$ belongs to class $j$, and $G_{ij}=0$ otherwise. There are $n_j$ data points in class $j$; of course $\sum n_j = n$.
We assume that the data are centered and so the global mean is equal to zero, $\newcommand{\bmu}{\boldsymbol \mu}\bmu=0$. Let $\bmu_j$ be the mean of class $j$.
LDA
The total scatter matrix $\newcommand{\C}{\mathbf C}\C=\X^\top \X$ can be decomposed into the sum of between-class and within-class scatter matrices defined as follows:
\begin{align}
\C_b &= \sum_j n_j \bmu_j \bmu_j^\top \\
\C_w &= \sum(\x_i - \bmu_{g(i)})(\x_i - \bmu_{g(i)})^\top.
\end{align}
One can verify that $\C = \C_b + \C_w$. LDA searches for discriminant axes that have maximal between-group variance and minimal within-group variance of the projection. Specifically, the first discriminant axis is the unit vector $\newcommand{\w}{\mathbf w}\w$ maximizing $\w^\top \C_b \w / (\w^\top \C_w \w)$, and the first $p$ discriminant axes stacked together into a matrix $\newcommand{\W}{\mathbf W}\W$ should maximize the trace $$\DeclareMathOperator{\tr}{tr} L_\mathrm{LDA}=\tr\left(\W^\top \C_b \W (\W^\top \C_w \W)^{-1}\right).$$
Assuming that $\C_w$ is full rank, LDA solution $\W_\mathrm{LDA}$ is the matrix of eigenvectors of $\C_w^{-1} \C_b$ (ordered by the eigenvalues in the decreasing order).
This was the usual story. Now let us make two important observations.
First, within-class scatter matrix can be replaced by the total scatter matrix (ultimately because maximizing $b/w$ is equivalent to maximizing $b/(b+w)$), and indeed, it is easy to see that $\C^{-1} \C_b$ has the same eigenvectors.
Second, the between-class scatter matrix can be expressed via the group membership matrix defined above. Indeed, $\G^\top \X$ is the matrix of group sums. To get the matrix of group means, it should be multiplied by the inverse of the diagonal matrix with $n_j$ on the diagonal; this diagonal matrix is given by $\G^\top \G$. Hence, the matrix of group means is $(\G^\top \G)^{-1}\G^\top \X$ (sapienti will notice that it's a regression formula). To get $\C_b$ we need to take its scatter matrix, weighted by the same diagonal matrix, obtaining $$\C_b = \X^\top \G (\G^\top \G)^{-1}\G^\top \X.$$ If all $n_j$ are identical and equal to $m$ ("balanced dataset"), then this expression simplifies to $\X^\top \G \G^\top \X / m$.
We can define the normalized indicator matrix $\newcommand{\tG}{\widetilde {\mathbf G}}\tG$ as having $1/\sqrt{n_j}$ where $\G$ has $1$. Then for both balanced and unbalanced datasets, the expression is simply $\C_b = \X^\top \tG \tG^\top \X$. Note that $\tG$ is, up to a constant factor, the whitened indicator matrix: $\tG = \G(\G^\top \G)^{-1/2}$.
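Both expressions for $\C_b$ are easy to check numerically. The following sketch (plain NumPy, on made-up and deliberately unbalanced data) compares the definition $\sum_j n_j \bmu_j \bmu_j^\top$ with the two indicator-matrix formulas:

```python
import numpy as np

rng = np.random.default_rng(1)
k, d = 3, 4
n_j = np.array([30, 50, 20])                   # deliberately unbalanced
labels = np.repeat(np.arange(k), n_j)
X = rng.normal(size=(labels.size, d)) + 2 * rng.normal(size=(k, d))[labels]
X = X - X.mean(axis=0)                         # center: global mean is zero

G = np.eye(k)[labels]                          # n x k indicator matrix
mu = np.linalg.inv(G.T @ G) @ G.T @ X          # group means (the regression formula)

Cb_def = sum(n_j[j] * np.outer(mu[j], mu[j]) for j in range(k))
Cb_G   = X.T @ G @ np.linalg.inv(G.T @ G) @ G.T @ X
Gt     = G / np.sqrt(n_j)                      # whitened indicator G (G^T G)^{-1/2}
Cb_Gt  = X.T @ Gt @ Gt.T @ X

print(np.allclose(Cb_def, Cb_G), np.allclose(Cb_def, Cb_Gt))  # True True
```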
Regression
For simplicity, we will start with the case of a balanced dataset.
Consider linear regression of $\G$ on $\X$. It finds $\newcommand{\B}{\mathbf B}\B$ minimizing $\| \G - \X \B\|^2$. Reduced rank regression does the same under the constraint that $\B$ should be of the given rank $p$. If so, then $\B$ can be written as $\newcommand{\D}{\mathbf D} \newcommand{\F}{\mathbf F} \B=\D\F^\top$ with both $\D$ and $\F$ having $p$ columns. One can show that the rank two solution can be obtained from the rank one solution by keeping the first column and adding an extra column, etc.
To establish the connection between LDA and linear regression, we will prove that $\D$ coincides with $\W_\mathrm{LDA}$.
The proof is straightforward. For the given $\D$, optimal $\F$ can be found via regression: $\F^\top = (\D^\top \X^\top \X \D)^{-1} \D^\top \X^\top \G$. Plugging this into the loss function, we get $$\| \G - \X \D (\D^\top \X^\top \X \D)^{-1} \D^\top \X^\top \G\|^2,$$ which can be written as trace using the identity $\|\mathbf A\|^2=\mathrm{tr}(\mathbf A \mathbf A^\top)$. After easy manipulations we get that the regression is equivalent to maximizing (!) the following scary trace: $$\tr\left(\D^\top \X^\top \G \G^\top \X \D (\D^\top \X^\top \X \D)^{-1}\right),$$ which is actually nothing else than $$\ldots = \tr\left(\D^\top \C_b \D (\D^\top \C \D)^{-1}\right)/m \sim L_\mathrm{LDA}.$$
This finishes the proof. For unbalanced datasets we need to replace $\G$ with $\tG$.
One can similarly show that adding ridge regularization to the reduced rank regression is equivalent to the regularized LDA.
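Here is a small numerical sanity check of the equivalence (a sketch with simulated, balanced data; NumPy only): it computes the classical LDA axes from $\C_w^{-1}\C_b$, the rank-$p$ RRR factor $\D$ via the SVD of the OLS fit, and confirms that the two span the same subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, d, p = 3, 50, 5, 2                      # balanced: m points per class
means = 3 * rng.normal(size=(k, d))
X = np.vstack([rng.normal(size=(m, d)) + means[j] for j in range(k)])
G = np.eye(k)[np.repeat(np.arange(k), m)]     # n x k indicator matrix
X = X - X.mean(axis=0)                        # center the data

C  = X.T @ X
Cb = X.T @ G @ np.linalg.inv(G.T @ G) @ G.T @ X
Cw = C - Cb

# Classical LDA: leading eigenvectors of Cw^{-1} Cb
evals, evecs = np.linalg.eig(np.linalg.solve(Cw, Cb))
W_lda = evecs.real[:, np.argsort(evals.real)[::-1][:p]]

# Rank-p RRR of G on X: project the OLS fit onto its top singular vectors
B_ols = np.linalg.lstsq(X, G, rcond=None)[0]
_, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
D = B_ols @ Vt[:p].T                          # the factor D = B_ols V_p

def projector(A):                             # orthogonal projector onto col(A)
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

print(np.allclose(projector(W_lda), projector(D)))  # True: same subspace
```

The comparison uses orthogonal projectors because $\W_\mathrm{LDA}$ and $\D$ are only defined up to column scaling; what the proof guarantees is equality of the spanned subspaces.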
Relationship between LDA, CCA, and RRR
In his answer, @ttnphns made a connection to canonical correlation analysis (CCA). Indeed, LDA can be shown to be equivalent to CCA between $\X$ and $\G$. In addition, CCA between any $\newcommand{\Y}{\mathbf Y}\Y$ and $\X$ can be written as RRR predicting whitened $\Y$ from $\X$. The rest follows from this.
Bibliography
It is hard to say who deserves the credit for what is presented above.
There is a recent conference paper by Cai et al. (2013) On The Equivalent of Low-Rank Regressions and Linear Discriminant Analysis Based Regressions that presents exactly the same proof as above but creates the impression that they invented this approach. This is definitely not the case. De la Torre wrote a detailed treatment of how most of the common linear multivariate methods can be seen as reduced rank regression, see A Least-Squares Framework for Component Analysis, 2009, and a later book chapter A unification of component analysis methods, 2013; he presents the same argument but does not give any references either. This material is also covered in the textbook Modern Multivariate Statistical Techniques (2008) by Izenman, who introduced RRR back in 1975.
The relationship between LDA and CCA apparently goes back to Bartlett, 1938, Further aspects of the theory of multiple regression -- that's the reference I often encounter (but did not verify). The relationship between CCA and RRR is described in Izenman, 1975, Reduced-rank regression for the multivariate linear model. So all of these ideas have been around for a while.
|
9,228
|
What is the relationship between regression and linear discriminant analysis (LDA)?
|
Linear regression and linear discriminant analysis are very different. Linear regression relates a dependent variable to a set of independent predictor variables. The idea is to find a function linear in the parameters that best fits the data. It does not even have to be linear in the covariates. Linear discriminant analysis on the other hand is a procedure for classifying objects into categories. For the two-class problem it seeks to find the best separating hyperplane for dividing the groups into two categories. Here best means that it minimizes a loss function that is a linear combination of the error rates. For three or more groups it finds the best set of hyperplanes (k-1 for the k-class problem). In discriminant analysis the hyperplanes are linear in the feature variables.
The main similarity between the two is the term linear in the titles.
|
9,229
|
Understanding complete separation for logistic regression [duplicate]
|
Here's a visual explanation of (1)
Imagine that you have a perfectly separated set of points, with the separation occurring at zero in the picture (so a clump of $y=0$s to the left of zero and a clump of $y=1$s to the right).
The sequence of curves I plotted is
$$\frac{1}{1 + e^{-x}}, \frac{1}{1 + e^{-2x}}, \frac{1}{1 + e^{-3x}}, \ldots $$
so I'm just increasing the coefficient without bound.
Which of the 20 curves would you choose? Each one hews ever closer to our imagined data. Would you keep going on to
$$\frac{1}{1 + e^{-21x}}$$
When would you stop?
For (2), yes. This is essentially by definition: you've implicitly assumed this in the construction of the binomial likelihood(*)
$$ L = \sum_i t_i \log(p_i) + (1 - t_i) \log(1 - p_i) $$
In each term in the summation only one of $t_i \log(p_i)$ or $(1 - t_i) \log(1 - p_i)$ is non-zero, with a contribution of $p_i$ for $t_i = 1$ and $1 - p_i$ for $t_i = 0$.
Why is there no convergence mathematically?
Here's a (more) formal mathematical proof.
First some setup and notations. Let's write
$$ S(\beta, x) = \frac{1}{1 + \exp(- \beta x)} $$
for the sigmoid function. We will need the two properties
$$ \lim_{\beta \rightarrow \infty} S(\beta, x) = 0 \ \text{for} \ x < 0 $$
$$ \lim_{\beta \rightarrow \infty} S(\beta, x) = 1 \ \text{for} \ x > 0 $$
with each approaching its limit monotonically: the first is decreasing, the second increasing. Each of these follows easily from the formula for $S$.
Let's also arrange things so that
Our data is centered, this allows us to ignore the intercept as it is zero.
The vertical line $x = 0$ separates our two classes.
Now, the function that we are maximizing in logistic regression is
$$ L(\beta) = \sum_i y_i \log(S(\beta, x_i)) + (1 - y_i) \log(1 - S(\beta, x_i)) $$
This summation has two types of terms. Terms in which $y_i = 0$, look like $\log(1 - S(\beta, x_i))$, and because of the perfect separation we know that for these terms $x_i < 0$. By the first limit above, this means that
$$ \lim_{\beta \rightarrow \infty} S(\beta, x_i) = 0$$
for every $x_i$ associated with a $y_i = 0$. Then, after applying the logarithm, we get the monotonic increasing limit towards zero:
$$ \lim_{\beta \rightarrow \infty} \log(1 - S(\beta, x_i)) = 0$$
You can easily use the same ideas to show that for the other type of terms
$$ \lim_{\beta \rightarrow \infty} \log(S(\beta, x_i)) = 0$$
again, the limit is a monotone increase.
So no matter what $\beta$ is, you can always drive the objective function upwards by increasing $\beta$ towards infinity. So the objective function has no maximum, and attempting to find one iteratively will just increase $\beta$ forever.
It's worth noting where we used the separation. If we could not find a separator then we could not partition the terms into two groups, we would instead have four types
Terms with $y_i = 0$ and $x_i > 0$
Terms with $y_i = 0$ and $x_i < 0$
Terms with $y_i = 1$ and $x_i > 0$
Terms with $y_i = 1$ and $x_i < 0$
In this case, when $\beta$ gets very large the terms with $y_i = 1$ and $x_i < 0$ will drive $\log(S(\beta, x_i))$ to negative infinity, and the terms with $y_i = 0$ and $x_i > 0$ will do the same to the corresponding $\log(1 - S(\beta, x_i))$. So somewhere in the middle, there must be a maximum.
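This four-term situation is easy to simulate. In the sketch below (NumPy, made-up data), one point on each side of zero carries the "wrong" label, and the log-likelihood now peaks at a finite $\beta$:

```python
import numpy as np

# Made-up data with the separation broken: one label on each side is "wrong"
x = np.array([-2.0, -1.5, -0.5, 0.5, 1.5, 2.0])
y = np.array([0, 0, 1, 0, 1, 1])

def log_lik(beta):
    p = 1.0 / (1.0 + np.exp(-beta * x))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Scan a grid of coefficients: the likelihood now peaks at a finite beta
betas = np.linspace(-10.0, 10.0, 2001)
lls = np.array([log_lik(b) for b in betas])
b_star = betas[np.argmax(lls)]
print(b_star)   # an interior maximum, not a boundary one
```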
(*) I replaced your $y_i$ with $p_i$ because the number is a probability, and calling it $p_i$ makes it easier to reason about the situation.
|
9,230
|
Is correlation equivalent to association?
|
No; correlation is not equivalent to association. However, the meaning of correlation is dependent upon context.
The classical statistics definition is, to quote from Kotz and Johnson's Encyclopedia of Statistical Sciences, "a measure of the strength of the linear relationship between two random variables". In mathematical statistics "correlation" seems to generally have this interpretation.
In applied areas where data is commonly ordinal rather than numeric (e.g., psychometrics and market research) this definition is not so helpful as the concept of linearity assumes data that has interval-scale properties. Consequently, in these fields correlation is instead interpreted as indicating a monotonically increasing or decreasing bivariate pattern or, a correlation of the ranks. A number of non-parametric correlation statistics have been developed specifically for this (e.g., Spearman's correlation and Kendall's tau-b). These are sometimes referred to as "non-linear correlations" because they are correlation statistics that do not assume linearity.
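As a quick illustration of that distinction (a sketch using SciPy's correlation functions on simulated data): for a monotone but strongly non-linear relationship, Pearson's $r$ falls short of 1 while the rank-based statistics equal 1 exactly, because the ranks agree perfectly.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

rng = np.random.default_rng(0)
x = rng.uniform(0, 5, size=200)
y = np.exp(x)                  # monotonically increasing, but far from linear

print(round(pearsonr(x, y)[0], 2))    # noticeably below 1
print(round(spearmanr(x, y)[0], 2))   # exactly 1.0: the ranks agree perfectly
print(round(kendalltau(x, y)[0], 2))  # exactly 1.0
```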
Amongst non-statisticians correlation often means association (sometimes with and sometimes without a causal connotation). Irrespective of the etymology of correlation, the reality is that amongst non-statisticians it has this broader meaning and no amount of chastising them for inappropriate usage is likely to change this. I have done a "google" and it seems that some of the uses of non-linear correlation seem to be of this kind (in particular, it seems that some people use the term to denote a smoothish non-linear relationship between numeric variables).
The context-dependent nature of the term "non-linear correlation" perhaps means it is ambiguous and should not be used. As regards "correlation", you need to work out the context of the person using the term in order to know what they mean.
|
9,231
|
Is correlation equivalent to association?
|
I don't see much point in trying to disentangle the terms "correlation" and "association." After all, Pearson himself (and others) developed a measure of nonlinear relationship which they named the "correlation ratio."
|
9,232
|
Is correlation equivalent to association?
|
There seems to be a misunderstanding of association. Measures of association (effect size) are inherent in quantitative analysis, not qualitative.
|
9,233
|
Is correlation equivalent to association?
|
I would say that correlation applies to quantitative data and association to qualitative data and both have no obligatory causal relationship.
|
9,234
|
Is correlation equivalent to association?
|
The idea that the weight (of a man) is not correlated with the height (because the corresponding function is of 3rd degree, not linear) seems very strange to me. Linear correlation should be treated as a special case of association.
|
9,235
|
Is correlation equivalent to association?
|
Correlation and association are different. Correlation describes three types of relationship: positive, negative, and uncorrelated. It also describes the magnitude of the correlation, from 0 to 1 and from -1 to 0. Association does not reveal the type of relationship or its strength.
|
9,236
|
Is correlation equivalent to association?
|
As far as the linearity is concerned the response by Tim and Nick Cox covered it completely. Where I thought I might be able to contribute is a clean way to think about the difference between association and correlation.
Association --- measures how closely related two variables are (i.e. whether they are dependent or independent).
Correlation --- measures in what way two variables are related (i.e. positive or negative).
In the end, I would argue that you can never go wrong treating them distinctly; it will help with interpretation and analyses in the long run. Hope this helps.
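A toy example of that difference (a sketch with simulated data): below, $y$ is a deterministic function of $x$, so the two are as closely associated as possible, yet the correlation is near zero because the relationship has no consistent direction.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = x ** 2                    # perfect dependence, but symmetric about zero

r = pearsonr(x, y)[0]
print(round(r, 2))            # near zero: strong association, no correlation
```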
|
9,237
|
Number of features vs. number of observations
|
What you've hit on here is the curse of dimensionality, or the p >> n problem (where p is the number of predictors and n the number of observations). Many techniques have been developed over the years to address it. You can use AIC or BIC to penalize models with more predictors. You can choose random sets of variables and assess their importance using cross-validation. You can use ridge regression, the lasso, or the elastic net for regularization. Or you can choose a technique, such as a support vector machine or random forest, that deals well with a large number of predictors.
Honestly, the solution depends on the specific nature of the problem you are trying to solve.
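As a toy illustration of the AIC idea (a Python sketch with simulated data, not from the original answer): AIC trades goodness of fit against the number of parameters, so a predictor is only kept if it reduces the residual sum of squares enough to pay its penalty.

```python
import math
import random

random.seed(0)
n = 50
x = [i / n for i in range(n)]
y = [2.0 * xi + random.gauss(0, 0.1) for xi in x]  # true model: y = 2x + noise

def aic(rss, k):
    # Gaussian AIC up to an additive constant; k = number of mean parameters
    return n * math.log(rss / n) + 2 * k

# Model A: intercept only
ybar = sum(y) / n
rss_a = sum((yi - ybar) ** 2 for yi in y)

# Model B: simple linear regression, closed-form least squares
xbar = sum(x) / n
b_num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
b_den = sum((xi - xbar) ** 2 for xi in x)
b = b_num / b_den
a = ybar - b * xbar
rss_b = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

print(aic(rss_a, 1) > aic(rss_b, 2))  # True: the slope earns its penalty
```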
|
9,238
|
Number of features vs. number of observations
|
I suspect that no such rule of thumb will be generally applicable. Consider a problem with two Gaussian classes centered on $\vec{+1}$ and $\vec{-1}$, both with covariance matrix $0.000001*\vec{I}$. In that case, you only need two samples, one from each class, to get perfect classification, almost regardless of the number of features. At the other end of the spectrum, if both classes are centered on the origin with covariance $\vec{I}$, no amount of training data will give you a useful classifier. At the end of the day, the number of samples you need for a given number of features depends on how the data are distributed; in general, the more features you have, the more data you will need to adequately describe the distribution of the data (exponentially many in the number of features if you are unlucky - see the curse of dimensionality mentioned by Zach).
If you use regularisation then, in principle, (an upper bound on) the generalisation error is independent of the number of features (see Vapnik's work on the support vector machine). However, that leaves the problem of finding a good value for the regularisation parameter (cross-validation is handy).
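The first scenario is easy to simulate (a Python sketch with made-up parameters): with tight, well-separated classes, a nearest-neighbour rule trained on a single sample per class classifies perfectly even with p = 100 features and only n = 2 observations.

```python
import math
import random

random.seed(1)
d = 100        # number of features (p)
sd = 0.001     # tiny within-class spread, as in the example above

def draw(sign):
    # sample from the class centred on (sign, sign, ..., sign)
    return [sign + random.gauss(0, sd) for _ in range(d)]

train = {+1: draw(+1), -1: draw(-1)}   # n = 2: one sample per class

def classify(z):
    # assign to the class of the nearest training point
    return min(train, key=lambda c: math.dist(z, train[c]))

tests = [(s, draw(s)) for s in (+1, -1) for _ in range(50)]
accuracy = sum(classify(z) == s for s, z in tests) / len(tests)
print(accuracy)  # 1.0 despite p >> n
```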
|
9,239
|
Number of features vs. number of observations
|
You are probably carrying over an impression from classical modelling, which is vulnerable to Runge-paradox-like problems and thus requires some parsimony tuning in post-processing.
In machine learning, however, the idea of including robustness as an aim of model optimization is the very core of the whole domain (often expressed as accuracy on unseen data). So, as long as you know your model works well (for instance from CV), there is probably no reason to worry.
The real problem with $p\gg n$ in the ML case is irrelevant attributes -- mostly because some set of them may become more useful for reproducing the decision than the truly relevant ones, purely due to random fluctuations. Obviously this issue has nothing to do with parsimony but, as in the classical case, it ends in a terrible loss of generalization power. How to solve it is a different story, called feature selection -- but the general idea is to pre-process the data to remove the noise rather than to put constraints on the model.
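A quick sketch of that failure mode (illustrative Python with made-up sizes): with only 10 observations and 10,000 pure-noise binary features, some feature will almost surely match the labels nearly perfectly by chance alone.

```python
import random

random.seed(2)
n, p = 10, 10_000
y = [random.choice((-1, 1)) for _ in range(n)]   # random labels

best = 0.0
for _ in range(p):
    f = [random.choice((-1, 1)) for _ in range(n)]  # pure-noise feature
    # for +/-1 vectors, the correlation is just the mean agreement
    r = sum(fi * yi for fi, yi in zip(f, y)) / n
    best = max(best, abs(r))

print(best)  # close to 1: noise "explains" the labels
```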
|
9,240
|
Statistical interpretation of Maximum Entropy Distribution
|
This isn't really my field, so some musings:
I will start with the concept of surprise. What does it mean to be surprised?
Usually, it means that something happened that was not expected to happen. So, surprise is a probabilistic concept and can be explicated as such (I. J. Good has written about that). See also Wikipedia and Bayesian Surprise.
Take the particular case of a yes/no situation, something can happen or not. It happens with probability $p$. Say, if $p=0.9$ and it happens, you are not really surprised.
If $p=0.05$ and it happens, you are somewhat surprised. And if $p=0.0000001$ and it happens, you are really surprised. So, a natural measure of "surprise value in observed outcome" is some (anti)monotone function of the probability of what happened. It seems natural (and works well ...) to take the logarithm of probability of what happened, and then we throw in a minus sign to get a positive number. Also, by taking the logarithm we concentrate on the order of the surprise, and, in practice, probabilities are often only known up to order, more or less.
So, we define
$$
\text{Surprise}(A) = -\log p(A)
$$
where $A$ is the observed outcome, and $p(A)$ is its probability.
Now we can ask what the expected surprise is. Let $X$ be a Bernoulli random variable with probability $p$. It has two possible outcomes, 0 and 1. The respective surprise values are
$$\begin{align}
\text{Surprise}(0) &= -\log(1-p) \\
\text{Surprise}(1) &= -\log p \end{align}
$$
so the surprise when observing $X$ is itself a random variable with expectation
$$
p \cdot -\log p + (1-p) \cdot -\log(1-p)
$$
and that is --- surprise! --- the entropy of $X$! So entropy is expected surprise!
Now, this question is about maximum entropy. Why would anybody want to use a maximum entropy distribution? Well, it must be because they want to be maximally surprised! Why would anybody want that?
A way to look at it is the following: You want to learn about something, and to that goal you set up some learning experiences (or experiments ...). If you already knew everything about this topic, you are able to always predict perfectly, so are never surprised. Then you never get new experience, so do not learn anything new (but you know everything already---there is nothing to learn, so that is OK). In the more typical situation that you are confused, not able to predict perfectly, there is a learning opportunity! This leads to the idea that we can measure the "amount of possible learning" by the expected surprise, that is, entropy. So, maximizing entropy is nothing other than maximizing opportunity for learning. That sounds like a useful concept, which could be useful in designing experiments and such things.
A poetic example is the well known
Wenn einer eine Reise macht, dann kann er was erzählen ... ("when someone goes on a journey, he has stories to tell")
One practical example: You want to design a system for online tests (online meaning that not everybody gets the same questions, the questions are chosen dynamically depending on previous answers, so optimized, in some way, for each person).
If you make the questions too difficult, so that they are never answered correctly, you learn nothing. That indicates you must lower the difficulty level. What is the optimal difficulty level, that is, the difficulty level which maximizes the rate of learning? Let the probability of a correct answer be $p$. We want the value of $p$ that maximizes the Bernoulli entropy, and that is $p=0.5$. So you aim to pose questions where the probability of obtaining a correct answer (from that person) is 0.5.
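A quick numerical check of that claim (a Python sketch): scanning over a grid of $p$ values confirms that the Bernoulli entropy peaks at $p = 0.5$.

```python
import math

def bernoulli_entropy(p):
    # expected surprise of a Bernoulli(p) outcome, in nats
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

grid = [i / 100 for i in range(101)]
best_p = max(grid, key=bernoulli_entropy)
print(best_p)  # 0.5, where the entropy equals log(2)
```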
Then the case of a continuous random variable $X$. How can we be surprised by observing $X$? The probability of any particular outcome $\{X=x\}$ is zero, the $-\log p$ definition is useless. But we will be surprised if the probability of observing something like $x$ is small, that is, if the density function value $f(x)$ is small (assuming $f$ is continuous). That leads to the definition
$$ \DeclareMathOperator{\E}{\mathbb{E}}
\text{Surprise}(x) = -\log f(x)
$$
With that definition, the expected surprise from observing $X$ is
$$
\E \{-\log f(X)\} = -\int f(x) \log f(x) \; dx
$$
that is, the expected surprise from observing $X$ is the differential entropy of $X$. It can also be seen as the expected negative loglikelihood.
But this isn't really the same as the first, discrete-event, case. To see that, consider an example. Let the random variable $X$ represent the length of a stone's throw (say in a sports competition). To measure that length we need to choose a unit of length, since there is no intrinsic scale for length as there is for probability. We could measure in mm or in km, or, more usually, in meters. But our definition of surprise, and hence of expected surprise, depends on the unit chosen, so there is no invariance. For that reason, values of differential entropy are not directly comparable the way Shannon entropy values are. It might still be useful, if one keeps this problem in mind.
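The unit dependence is easy to see for a normal density, whose differential entropy is the standard $\frac{1}{2}\log(2\pi e \sigma^2)$ (a Python check using this formula): rescaling metres to millimetres multiplies $\sigma$ by 1000 and shifts the entropy by $\log 1000$.

```python
import math

def normal_diff_entropy(sigma):
    # differential entropy of N(mu, sigma^2), in nats
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

h_metres = normal_diff_entropy(1.0)       # sigma = 1 metre
h_millis = normal_diff_entropy(1000.0)    # the same spread, in millimetres
print(h_millis - h_metres)  # log(1000): the value depends on the unit
```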
|
9,241
|
Statistical interpretation of Maximum Entropy Distribution
|
Perhaps not exactly what you are after, but in Rissanen, J. Stochastic Complexity in Statistical Inquiry, World Scientific, 1989, p. 41 there is an interesting connection of maximum entropy, the normal distribution and the central limit theorem. Among all densities with mean zero and standard deviation $\sigma$, the normal density has maximum entropy.
"Hence, in this interpretation the basic central limit theorem expresses the fact that the per symbol entropy of sums of independent random variables with mean zero and common variance tends to the maximum. This seems eminently reasonable; in fact, it is an expression of the second law of thermodynamics, which Eddington viewed as holding 'the supreme position among the laws of Nature'."
I have not yet explored the implications of this, nor am I sure I fully understand them.
|
9,242
|
Statistical interpretation of Maximum Entropy Distribution
|
While not an expert in information theory and maximum entropy, I've been interested in it for a while.
The entropy is a measure of the uncertainty of a probability distribution that was derived according to a set of criteria. It and related measures characterize probability distributions. And, it's the unique measure that satisfies those criteria. This is similar to the case of probability itself, which as explained beautifully in Jaynes (2003), is the unique measure that satisfies some very desirable criteria for any measure of uncertainty of logical statements.
Any other measure of the uncertainty of a probability distribution that was different than entropy would have to violate one or more of the criteria used to define entropy (otherwise it would necessarily be entropy). So, if you had some general statement in terms of probability that somehow gave the same results as maximum entropy... then it would be maximum entropy!
The closest thing I can find to a probability statement about maximum entropy distributions so far is Jaynes's concentration theorem. You can find it clearly explained in Kapur and Kesavan (1992). Here is a loose restatement:
We require a discrete probability distribution $p$ on $n$ outcomes. That is, we require $p_i$, $i=1,...,n$. We have $m$ constraints that our probability distribution has to satisfy; additionally, since probabilities must add to 1 we have a total of $m+1$ constraints.
Let $S$ be the entropy of some distribution that satisfies the $m+1$ constraints and let $S_{\textrm{max}}$ be the entropy of the maximum entropy distribution.
As the size of the set of observations $N$ grows, we have
$$2N(S_{\textrm{max}} - S) \sim \chi^2_{n-m-1}.$$
With this, a 95% entropy interval is defined as
$$\left( S_{\textrm{max}} - \frac {\chi^2_{n-m-1} (0.95)}{2N}, S_{\textrm{max}} \right).$$
So, any other distribution that satisfies the same constraints as the maximum entropy distribution has a 95% chance of having entropy greater than $S_{\textrm{max}} - \frac {\chi^2_{n-m-1} (0.95)}{2N}$.
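A small numerical sketch of the theorem for a die (Python; the $\chi^2$ quantile is taken from standard tables): with no moment constraints ($m = 0$), the uniform distribution attains $S_{\textrm{max}} = \log 6$, and mild deviations from uniformity stay inside the 95% entropy interval for $N = 1000$ tosses.

```python
import math

def entropy(p):
    # Shannon entropy of a discrete distribution, in nats
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

n_out, m, N = 6, 0, 1000
s_max = math.log(n_out)        # uniform distribution maximizes entropy
chi2_95 = 11.070               # tabulated 95% quantile, df = n_out - m - 1 = 5
lower = s_max - chi2_95 / (2 * N)

# a die that deviates slightly from uniform (deviations sum to 0)
p = [1/6 + d for d in (0.01, -0.01, 0.005, -0.005, 0.002, -0.002)]
print(lower < entropy(p) <= s_max)  # True
```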
E.T. Jaynes (2003) Probability Theory: The Logic of Science. Cambridge University Press.
J.N. Kapur and H.K. Kesavan (1992) Entropy Optimization Principles with Applications. Academic Press, Inc.
|
9,243
|
Statistical interpretation of Maximum Entropy Distribution
|
You might want to have a look at the Wallis derivation.
https://en.wikipedia.org/wiki/Principle_of_maximum_entropy#The_Wallis_derivation
It has the advantage of being strictly combinatorial in nature, making no reference to information entropy as a measure of 'uncertainty', 'uninformativeness', or any other imprecisely defined concept.
The wikipedia page is excellent, but let me add a simple example to illustrate the idea.
Suppose you have a die. If the die is fair, the average value of the number shown will be 3.5. Now imagine a die for which the average value shown is a bit higher, let's say 4.
How can it do that? Well, it could do it in a zillion ways! It could, for example, show 4 every single time. Or it could show 3, 4, and 5 with equal probability.
Let's say you want to write a computer program that simulates a die with average 4. How would you do it?
An interesting solution is this.
You start with a fair die. You roll it many times (say 100) and you get a bunch of numbers. If the average of these numbers is 4, you accept the sample. Otherwise you reject it and try again.
After many, many attempts, you finally get a sample with average 4. Your computer program will then simply return a number randomly chosen from this sample.
Which numbers will it show?
Well, for example, you expect 1 to appear sometimes, but probably not 1/6 of the time, because a 1 lowers the average of the sample and therefore increases the probability of the sample being rejected.
In the limit of a very big sample, the numbers will be distributed according to this:
https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution#Discrete_distributions_with_specified_mean
which is the distribution with maximum entropy among the ones with specified mean.
Aha!
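Rather than waiting for rejection sampling, the limiting distribution can be computed directly (a Python sketch): the maximum entropy distribution on $\{1,\dots,6\}$ with mean 4 has the exponential form $p_i \propto e^{\lambda i}$, and $\lambda$ can be found by bisection.

```python
import math

faces = range(1, 7)

def maxent_probs(lam):
    # p_i proportional to exp(lam * i), normalized
    w = [math.exp(lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

def mean(p):
    return sum(i * pi for i, pi in zip(faces, p))

# bisect on lambda: the mean is increasing in lambda
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean(maxent_probs(mid)) < 4.0:
        lo = mid
    else:
        hi = mid

p = maxent_probs((lo + hi) / 2)
print([round(pi, 3) for pi in p])  # probabilities increase with the face value
```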
|
9,244
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
|
Implementation:
The topicmodels package provides an interface to the GSL C and C++ code for topic models by Blei et al. and Phan et al. For the former it uses variational EM, for the latter Gibbs sampling. See http://www.jstatsoft.org/v40/i13/paper . The package works well with the utilities from the tm package.
The lda package uses a collapsed Gibbs sampler for a number of models similar to those from the GSL library. However, it has been implemented by the package authors themselves, not by Blei et al. This implementation therefore generally differs from the estimation technique proposed in the original papers introducing these model variants, where the VEM algorithm is usually applied. On the other hand, the package offers more functionality than the other package.
The package provides text mining functionality too.
Extensibility:
Regarding extensibility, the topicmodels code can by its very nature be extended to interface other topic-model code written in C and C++. The lda package seems to rely more on the specific implementation provided by its authors, but its Gibbs sampler might allow you to specify your own topic model. Nota bene for extensibility: the former is licensed under GPL-2 and the latter under LGPL, so it may depend on what you need to extend it for (GPL-2 is stricter regarding the open-source aspect, i.e. you can't use it in proprietary software).
Performance:
I can't help you here, I only used topicmodels so far.
Conclusion:
Personally I use topicmodels, as it is well documented (see the JSS paper above) and I trust the authors (Grün also implemented flexmix, and Hornik is an R core member).
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
|
Implementation:
The topicmodels package provides an interface to the GSL C and C++ code for topic models by Blei et al. and Phan et al. For the earlier it uses Variational EM, for the latter Gibbs Sam
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
Implementation:
The topicmodels package provides an interface to the GSL C and C++ code for topic models by Blei et al. and Phan et al. For the former it uses Variational EM, for the latter Gibbs Sampling. See http://www.jstatsoft.org/v40/i13/paper . The package works well with the utilities from the tm package.
The lda package uses a collapsed Gibbs Sampler for a number of models similar to those from the GSL library. However, it has been implemented by the package authors themselves, not by Blei et al. This implementation therefore differs in general from the estimation technique proposed in the original papers introducing these model variants, where the VEM algorithm is usually applied. On the other hand, the package offers more functionality than the other package.
The package provides text mining functionality too.
Extensibility:
Regarding extensibility, the topicmodels code by its very nature can be extended to interface other topic model code written in C and C++. The lda package seems to rely more on the specific implementation provided by the authors, but its Gibbs sampler might allow specifying your own topic model. Nota bene for extensibility: the former is licensed under GPL-2 and the latter under LGPL, so it might depend on what you need to extend it for (GPL-2 is stricter regarding the open source aspect, i.e. you can't use it in proprietary software).
Performance:
I can't help you here, I only used topicmodels so far.
Conclusion:
Personally I use topicmodels, as it is well documented (see the JSS paper above) and I trust the authors (Grün also implemented flexmix and Hornik is an R core member).
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
Implementation:
The topicmodels package provides an interface to the GSL C and C++ code for topic models by Blei et al. and Phan et al. For the earlier it uses Variational EM, for the latter Gibbs Sam
|
9,245
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
|
+1 for topicmodels. @Momo's answer is very comprehensive. I'd just add that topicmodels takes input as document term matrices, which are easily made with the tm package or using python. The lda package uses a more esoteric form of input (based on Blei's LDA-C) and I've had no luck using the built-in functions to convert dtm into the lda package format (the lda documentation is very poor, as Momo notes).
I have some code that starts with raw text, pre-processes it in tm and puts it through topicmodels (including finding the optimum number of topics in advance and working with the output) here. Could be useful to someone coming to topicmodels for the first time.
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
|
+1 for topicmodels. @Momo's answer is very comprehensive. I'd just add that topicmodels takes input as document term matrices, which are easily made with the tm package or using python. The lda packag
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
+1 for topicmodels. @Momo's answer is very comprehensive. I'd just add that topicmodels takes input as document term matrices, which are easily made with the tm package or using python. The lda package uses a more esoteric form of input (based on Blei's LDA-C) and I've had no luck using the built-in functions to convert dtm into the lda package format (the lda documentation is very poor, as Momo notes).
I have some code that starts with raw text, pre-processes it in tm and puts it through topicmodels (including finding the optimum number of topics in advance and working with the output) here. Could be useful to someone coming to topicmodels for the first time.
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
+1 for topicmodels. @Momo's answer is very comprehensive. I'd just add that topicmodels takes input as document term matrices, which are easily made with the tm package or using python. The lda packag
|
9,246
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
|
The R Structural Topic Model (STM) package by Molly Roberts, Brandon Stewart and Dustin Tingley is also a great choice. Built on top of the tm package it's a general framework for topic modeling with document-level covariate information.
http://structuraltopicmodel.com/
The STM package includes a series of methods (grid search) and measures (semantic coherence, residuals and exclusivity) to determine the number of topics. Setting the number of topics to 0 will also let the model determine an optimum number of topics.
The stmBrowser package is a great data visualization complement to visualize the influence of external variables on topics. See this example related to the 2016 presidential debates: http://alexperrier.github.io/stm-visualization/index.html.
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
|
The R Structural Topic Model (STM) package by Molly Roberts, Brandon Stewart and Dustin Tingley is also a great choice. Built on top of the tm package it's a general framework for topic modeling wit
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
The R Structural Topic Model (STM) package by Molly Roberts, Brandon Stewart and Dustin Tingley is also a great choice. Built on top of the tm package it's a general framework for topic modeling with document-level covariate information.
http://structuraltopicmodel.com/
The STM package includes a series of methods (grid search) and measures (semantic coherence, residuals and exclusivity) to determine the number of topics. Setting the number of topics to 0 will also let the model determine an optimum number of topics.
The stmBrowser package is a great data visualization complement to visualize the influence of external variables on topics. See this example related to the 2016 presidential debates: http://alexperrier.github.io/stm-visualization/index.html.
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
The R Structural Topic Model (STM) package by Molly Roberts, Brandon Stewart and Dustin Tingley is also a great choice. Built on top of the tm package it's a general framework for topic modeling wit
|
9,247
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
|
I used all three libraries (topicmodels, lda and stm); not every one of them works with n-grams. The topicmodels library is good with its estimation and it also works with n-grams. But if anyone is working with unigrams then the practitioner may prefer stm, as it gives structured output.
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
|
I used all three libraries, among all 3 viz., topicmodels, lda, stm; not everyone works with n grams. The topicmodels library is good with its estimation and it also work with n grams. But if anyone i
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
I used all three libraries (topicmodels, lda and stm); not every one of them works with n-grams. The topicmodels library is good with its estimation and it also works with n-grams. But if anyone is working with unigrams then the practitioner may prefer stm, as it gives structured output.
|
R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed]
I used all three libraries, among all 3 viz., topicmodels, lda, stm; not everyone works with n grams. The topicmodels library is good with its estimation and it also work with n grams. But if anyone i
|
9,248
|
Correcting p values for multiple tests where tests are correlated (genetics)
|
This is actually a hot topic in genome-wide association studies (GWAS)! I am not sure the method you are thinking of is the most appropriate in this context. Pooling of p-values was described by some authors, but in a different context (replication studies or meta-analysis, see e.g. (1) for a recent review). Combining SNP p-values by Fisher's method is generally used when one wants to derive a unique p-value for a given gene; this allows one to work at the gene level and reduce the dimensionality of subsequent testing, but as you said the non-independence between markers (arising from spatial colocation or linkage disequilibrium, LD) introduces a bias. More powerful alternatives rely on resampling procedures, for example the use of maxT statistics for combining p-values and working at the gene level, or when one is interested in pathway-based approaches, see e.g. (2) (§2.4 p. 93 provides details on their approach).
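For reference, the pooling step of Fisher's method itself (setting aside the LD caveat above, which is the real problem) is short. A minimal Python sketch of my own; in R the same thing is a one-liner around pchisq:

```python
import math

def fisher_combined_p(pvalues):
    """Fisher's method: combine k independent p-values into one.

    The statistic X = -2 * sum(log p_i) is chi-squared with 2k degrees
    of freedom under the global null; for even df the survival function
    has the closed form exp(-x/2) * sum_{i<k} (x/2)^i / i!.
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

# two unimpressive p-values combine to something still unimpressive
print(round(fisher_combined_p([0.1, 0.2]), 4))  # → 0.0982
```

A single p-value is returned unchanged (`fisher_combined_p([0.5]) == 0.5`), which is a quick sanity check on the closed form. Under LD the statistic is no longer chi-squared with 2k df, which is exactly the bias discussed above.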
My main concern with bootstrapping (with replacement) would be that you are introducing an artificial form of relatedness, or in other words you create virtual twins, hence altering Hardy-Weinberg equilibrium (but also minimum allele frequency and call rate). This would not be the case with a permutation approach where you permute individual labels and keep the genotyping data as is. Usually, the plink software can give you raw and permuted p-values, although it uses (by default) an adaptive testing strategy with a sliding window that allows it to stop running all permutations (say 1000 per SNP) if it appears that the SNP under consideration is not "interesting"; it also has an option for computing maxT, see the online help.
But given the low number of SNPs you are considering, I would suggest relying on FDR-based or maxT tests as implemented in the multtest R package (see mt.maxT); the definitive guide to resampling strategies for genomic applications is Multiple Testing Procedures with Applications to Genomics, by Dudoit & van der Laan (Springer, 2008). See also Andrea Foulkes's book on genetics with R, which is reviewed in the JSS. She has great material on multiple testing procedures.
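The permutation/maxT logic is simple enough to sketch. This is a toy Python version of my own (a plain mean-difference statistic and invented data), not the plink or multtest implementation; the key point is that permuting the phenotype labels, rather than resampling individuals with replacement, leaves the genotype data and hence the LD structure between markers intact:

```python
import random

def maxT_pvalues(X, y, n_perm=2000, seed=0):
    """Westfall–Young maxT adjusted p-values via label permutation.

    X: list of samples, each a list of marker scores; y: 0/1 phenotype.
    The per-marker statistic is an absolute difference of group means;
    adjusted p-values compare each observed statistic to the permutation
    distribution of the maximum statistic across all markers.
    """
    rng = random.Random(seed)
    m = len(X[0])

    def stats(labels):
        out = []
        for j in range(m):
            g0 = [x[j] for x, l in zip(X, labels) if l == 0]
            g1 = [x[j] for x, l in zip(X, labels) if l == 1]
            out.append(abs(sum(g1) / len(g1) - sum(g0) / len(g0)))
        return out

    obs = stats(y)
    exceed = [0] * m
    for _ in range(n_perm):
        perm = y[:]
        rng.shuffle(perm)            # genotypes untouched, labels shuffled
        max_t = max(stats(perm))     # null distribution of the maximum
        for j in range(m):
            if max_t >= obs[j]:
                exceed[j] += 1
    return [(c + 1) / (n_perm + 1) for c in exceed]

y = [0, 0, 0, 0, 1, 1, 1, 1]
X = [[float(l), 0.0] for l in y]   # marker 0 tracks the phenotype, marker 1 is flat
adj = maxT_pvalues(X, y)
assert adj[0] < adj[1]
```

Because the maximum over markers is used, strong dependence between markers automatically makes the correction less conservative than Bonferroni, which is the attraction of the method in the LD setting.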
Further Notes
Many authors have pointed out that simple multiple testing correction methods such as Bonferroni or Sidak are too stringent for adjusting the results for individual SNPs. Moreover, neither of these methods takes into account the correlation that exists between SNPs due to LD, which tags the genetic variation across gene regions. Other alternatives have been proposed, like a derivative of Holm's method for multiple comparisons (3), hidden Markov models (4), conditional or positive FDR (5) or derivatives thereof (6), to name a few. So-called gap statistics or sliding windows have proved successful in some cases, but you'll find a good review in (7) and (8).
I've also heard of methods that make effective use of the haplotype structure or LD, e.g. (9), but I never used them. They seem, however, more related to estimating the correlation between markers than the p-values you meant. In fact, you might better think in terms of the dependency structure between successive test statistics than between correlated p-values.
References
Cantor, RM, Lange, K and Sinsheimer, JS. Prioritizing GWAS Results: A Review of Statistical Methods and Recommendations for Their Application. Am J Hum Genet. 2010 86(1): 6–22.
Corley, RP, Zeiger, JS, Crowley, T et al. Association of candidate genes with antisocial drug dependence in adolescents. Drug and Alcohol Dependence 2008 96: 90–98.
Dalmasso, C, Génin, E and Trégouet DA. A Weighted-Holm Procedure Accounting for Allele Frequencies in Genomewide Association Studies. Genetics 2008 180(1): 697–702.
Wei, Z, Sun, W, Wang, K, and Hakonarson, H. Multiple Testing in Genome-Wide Association Studies via Hidden Markov Models. Bioinformatics 2009 25(21): 2802-2808.
Broberg, P. A comparative review of estimates of the proportion unchanged genes and the false discovery rate. BMC Bioinformatics 2005 6: 199.
Need, AC, Ge, D, Weale, ME, et al. A Genome-Wide Investigation of SNPs and CNVs in Schizophrenia. PLoS Genet. 2009 5(2): e1000373.
Han, B, Kang, HM, and Eskin, E. Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers. PLoS Genetics 2009
Liang, Y and Kelemen, A. Statistical advances and challenges for analyzing correlated high dimensional snp data in genomic study for complex diseases. Statistics Surveys 2008 2 :43–60. -- the best recent review ever
Nyholt, DR. A Simple Correction for Multiple Testing for Single-Nucleotide Polymorphisms in Linkage Disequilibrium with Each Other. Am J Hum Genet. 2004 74(4): 765–769.
Nicodemus, KK, Liu, W, Chase, GA, Tsai, Y-Y, and Fallin, MD. Comparison of type I error for multiple test corrections in large single-nucleotide polymorphism studies using principal components versus haplotype blocking algorithms. BMC Genetics 2005; 6(Suppl 1): S78.
Peng, Q, Zhao, J, and Xue, F. PCA-based bootstrap confidence interval tests for gene-disease association involving multiple SNPs. BMC Genetics 2010, 11:6
Li, M, Romero, R, Fu, WJ, and Cui, Y (2010). Mapping Haplotype-haplotype Interactions with Adaptive LASSO. BMC Genetics 2010, 11:79 -- although not directly related to the question, it covers haplotype-based analysis/epistatic effect
|
Correcting p values for multiple tests where tests are correlated (genetics)
|
This is actually a hot topic in Genomewide analysis studies (GWAS)! I am not sure the method you are thinking of is the most appropriate in this context. Pooling of p-values was described by some auth
|
Correcting p values for multiple tests where tests are correlated (genetics)
This is actually a hot topic in genome-wide association studies (GWAS)! I am not sure the method you are thinking of is the most appropriate in this context. Pooling of p-values was described by some authors, but in a different context (replication studies or meta-analysis, see e.g. (1) for a recent review). Combining SNP p-values by Fisher's method is generally used when one wants to derive a unique p-value for a given gene; this allows one to work at the gene level and reduce the dimensionality of subsequent testing, but as you said the non-independence between markers (arising from spatial colocation or linkage disequilibrium, LD) introduces a bias. More powerful alternatives rely on resampling procedures, for example the use of maxT statistics for combining p-values and working at the gene level, or when one is interested in pathway-based approaches, see e.g. (2) (§2.4 p. 93 provides details on their approach).
My main concern with bootstrapping (with replacement) would be that you are introducing an artificial form of relatedness, or in other words you create virtual twins, hence altering Hardy-Weinberg equilibrium (but also minimum allele frequency and call rate). This would not be the case with a permutation approach where you permute individual labels and keep the genotyping data as is. Usually, the plink software can give you raw and permuted p-values, although it uses (by default) an adaptive testing strategy with a sliding window that allows it to stop running all permutations (say 1000 per SNP) if it appears that the SNP under consideration is not "interesting"; it also has an option for computing maxT, see the online help.
But given the low number of SNPs you are considering, I would suggest relying on FDR-based or maxT tests as implemented in the multtest R package (see mt.maxT); the definitive guide to resampling strategies for genomic applications is Multiple Testing Procedures with Applications to Genomics, by Dudoit & van der Laan (Springer, 2008). See also Andrea Foulkes's book on genetics with R, which is reviewed in the JSS. She has great material on multiple testing procedures.
Further Notes
Many authors have pointed out that simple multiple testing correction methods such as Bonferroni or Sidak are too stringent for adjusting the results for individual SNPs. Moreover, neither of these methods takes into account the correlation that exists between SNPs due to LD, which tags the genetic variation across gene regions. Other alternatives have been proposed, like a derivative of Holm's method for multiple comparisons (3), hidden Markov models (4), conditional or positive FDR (5) or derivatives thereof (6), to name a few. So-called gap statistics or sliding windows have proved successful in some cases, but you'll find a good review in (7) and (8).
I've also heard of methods that make effective use of the haplotype structure or LD, e.g. (9), but I never used them. They seem, however, more related to estimating the correlation between markers than the p-values you meant. In fact, you might better think in terms of the dependency structure between successive test statistics than between correlated p-values.
References
Cantor, RM, Lange, K and Sinsheimer, JS. Prioritizing GWAS Results: A Review of Statistical Methods and Recommendations for Their Application. Am J Hum Genet. 2010 86(1): 6–22.
Corley, RP, Zeiger, JS, Crowley, T et al. Association of candidate genes with antisocial drug dependence in adolescents. Drug and Alcohol Dependence 2008 96: 90–98.
Dalmasso, C, Génin, E and Trégouet DA. A Weighted-Holm Procedure Accounting for Allele Frequencies in Genomewide Association Studies. Genetics 2008 180(1): 697–702.
Wei, Z, Sun, W, Wang, K, and Hakonarson, H. Multiple Testing in Genome-Wide Association Studies via Hidden Markov Models. Bioinformatics 2009 25(21): 2802-2808.
Broberg, P. A comparative review of estimates of the proportion unchanged genes and the false discovery rate. BMC Bioinformatics 2005 6: 199.
Need, AC, Ge, D, Weale, ME, et al. A Genome-Wide Investigation of SNPs and CNVs in Schizophrenia. PLoS Genet. 2009 5(2): e1000373.
Han, B, Kang, HM, and Eskin, E. Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers. PLoS Genetics 2009
Liang, Y and Kelemen, A. Statistical advances and challenges for analyzing correlated high dimensional snp data in genomic study for complex diseases. Statistics Surveys 2008 2 :43–60. -- the best recent review ever
Nyholt, DR. A Simple Correction for Multiple Testing for Single-Nucleotide Polymorphisms in Linkage Disequilibrium with Each Other. Am J Hum Genet. 2004 74(4): 765–769.
Nicodemus, KK, Liu, W, Chase, GA, Tsai, Y-Y, and Fallin, MD. Comparison of type I error for multiple test corrections in large single-nucleotide polymorphism studies using principal components versus haplotype blocking algorithms. BMC Genetics 2005; 6(Suppl 1): S78.
Peng, Q, Zhao, J, and Xue, F. PCA-based bootstrap confidence interval tests for gene-disease association involving multiple SNPs. BMC Genetics 2010, 11:6
Li, M, Romero, R, Fu, WJ, and Cui, Y (2010). Mapping Haplotype-haplotype Interactions with Adaptive LASSO. BMC Genetics 2010, 11:79 -- although not directly related to the question, it covers haplotype-based analysis/epistatic effect
|
Correcting p values for multiple tests where tests are correlated (genetics)
This is actually a hot topic in Genomewide analysis studies (GWAS)! I am not sure the method you are thinking of is the most appropriate in this context. Pooling of p-values was described by some auth
|
9,249
|
Correcting p values for multiple tests where tests are correlated (genetics)
|
Using a method like bonferroni is fine, the problem is that if you have many tests you are not likely to find many "discoveries".
You can go with the FDR approach for dependent tests (see here for details) the problem is that I am not sure if you can say upfront if your correlations are all positive ones.
In R you can do simple FDR with p.adjust. For more complex things I would take a look at multcomp, but I didn't go through it to check for solutions in cases of dependencies.
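The Benjamini–Hochberg step that p.adjust performs is short enough to write out. A Python sketch of the same algorithm as `p.adjust(p, method = "BH")` (my own transcription, for illustration):

```python
def bh_adjust(pvalues):
    """Benjamini–Hochberg adjusted p-values (same algorithm as R's
    p.adjust with method = "BH").

    Walk the p-values from largest to smallest, taking the running
    minimum of p * m / rank, then restore the original order.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i], reverse=True)
    adjusted = [0.0] * m
    running_min = 1.0
    for rank_from_top, i in enumerate(order):
        rank = m - rank_from_top          # rank in ascending order
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

print(bh_adjust([0.01, 0.02, 0.03, 0.5]))  # → [0.04, 0.04, 0.04, 0.5]
```

Note this is the plain BH procedure, which controls the FDR under independence or positive dependence; for the arbitrary-dependence case mentioned above (Benjamini–Yekutieli), R's `p.adjust(method = "BY")` additionally multiplies by the harmonic sum.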
Good luck.
|
Correcting p values for multiple tests where tests are correlated (genetics)
|
Using a method like bonferroni is fine, the problem is that if you have many tests you are not likely to find many "discoveries".
You can go with the FDR approach for dependent tests (see here for det
|
Correcting p values for multiple tests where tests are correlated (genetics)
Using a method like bonferroni is fine, the problem is that if you have many tests you are not likely to find many "discoveries".
You can go with the FDR approach for dependent tests (see here for details) the problem is that I am not sure if you can say upfront if your correlations are all positive ones.
In R you can do simple FDR with p.adjust. For more complex things I would take a look at multcomp, but I didn't go through it to see for solutions in cases of dependencies.
Good luck.
|
Correcting p values for multiple tests where tests are correlated (genetics)
Using a method like bonferroni is fine, the problem is that if you have many tests you are not likely to find many "discoveries".
You can go with the FDR approach for dependent tests (see here for det
|
9,250
|
Correcting p values for multiple tests where tests are correlated (genetics)
|
I think Multivariate Normal Models are being used to model the correlated p-values and to get the right type of multiple testing corrections. Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers. PLoS Genet 2009 talks about them and also gives other references. It sounds similar to what you were talking about, but I think other than getting a more accurate global p-value correction, LD structure knowledge should also be use to remove spurious positives arising from markers correlated with causal markers.
|
Correcting p values for multiple tests where tests are correlated (genetics)
|
I think Multivariate Normal Models are being used to model the correlated p-values and to get the right type of multiple testing corrections. Rapid and Accurate Multiple Testing Correction and Power
|
Correcting p values for multiple tests where tests are correlated (genetics)
I think Multivariate Normal Models are being used to model the correlated p-values and to get the right type of multiple testing corrections. Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers. PLoS Genet 2009 talks about them and also gives other references. It sounds similar to what you were talking about, but I think other than getting a more accurate global p-value correction, LD structure knowledge should also be use to remove spurious positives arising from markers correlated with causal markers.
|
Correcting p values for multiple tests where tests are correlated (genetics)
I think Multivariate Normal Models are being used to model the correlated p-values and to get the right type of multiple testing corrections. Rapid and Accurate Multiple Testing Correction and Power
|
9,251
|
Correcting p values for multiple tests where tests are correlated (genetics)
|
I am looking for a working solution for exactly the same problem. The best I found is the Null Unrestricted Bootstrap introduced by Andrea Foulkes in her book Applied Statistical Genetics with R (2009). Contrary to a whole bunch of other articles and books, she considers regressions specifically. Besides other methods she advises the Null Unrestricted Bootstrap, which is suitable where one cannot easily compute residuals (as in my case, where I model many independent regressions (basically simple correlations), each with the same response variable and a different SNP). I found this method is also called the maxT method.
attach(fms)
Actn3Bin <- data.frame(actn3_r577x!="TT", actn3_rs540874!="AA",
                       actn3_rs1815739!="TT", actn3_1671064!="GG")
Mod <- summary(lm(NDRM.CH~., data=Actn3Bin))
CoefObs <- as.vector(Mod$coefficients[-1,1])
B <- 1000
TestStatBoot <- matrix(nrow=B, ncol=NSnps)
for (i in 1:B){
  SampID <- sample(1:Nobs, size=Nobs, replace=T)
  Ynew <- NDRM.CH[!MissDat][SampID]
  Xnew <- Actn3BinC[SampID,]
  CoefBoot <- summary(lm(Ynew~., data=Xnew))$coefficients[-1,1]
  SEBoot <- summary(lm(Ynew~., data=Xnew))$coefficients[-1,2]
  if (length(CoefBoot)==length(CoefObs)){
    TestStatBoot[i,] <- (CoefBoot-CoefObs)/SEBoot
  }
}
Once we have the whole TestStatBoot matrix (rows are bootstrap replications, columns are the bootstrapped $\hat{T}^*$ statistics) we find the $T_{\text{crit.}}$ for which exactly a fraction $\alpha=0.05$ of the replications contain a more significant $\hat{T}^*$ statistic (more significant meaning a bigger absolute value than $T_{\text{crit.}}$).
We report the $i$-th model component significant if $|\hat{T}_i| > T_{\text{crit.}}$
The last step can be accomplished with this code
p.value<-0.05 # The target alpha threshold
digits<-1000000
library(gtools) # for binsearch
pValueFun<-function(cj)
{
mean(apply(abs(TestStatBoot)>cj/digits,1,sum)>=1,na.rm=T)
}
ans<-binsearch(pValueFun,c(0.5*digits,100*digits),target=p.value)
p.level<-(1-pnorm(q=ans$where[[1]]/digits))*2 #two-sided.
|
Correcting p values for multiple tests where tests are correlated (genetics)
|
I am looking for a working solution for exactly the same problem. The best I found is the Null Unrestricted Bootstrap introduced by Foulkes Andrea in his book Applied Statistical Genetics with R(2009)
|
Correcting p values for multiple tests where tests are correlated (genetics)
I am looking for a working solution for exactly the same problem. The best I found is the Null Unrestricted Bootstrap introduced by Andrea Foulkes in her book Applied Statistical Genetics with R (2009). Contrary to a whole bunch of other articles and books, she considers regressions specifically. Besides other methods she advises the Null Unrestricted Bootstrap, which is suitable where one cannot easily compute residuals (as in my case, where I model many independent regressions (basically simple correlations), each with the same response variable and a different SNP). I found this method is also called the maxT method.
attach(fms)
Actn3Bin <- data.frame(actn3_r577x!="TT", actn3_rs540874!="AA",
                       actn3_rs1815739!="TT", actn3_1671064!="GG")
Mod <- summary(lm(NDRM.CH~., data=Actn3Bin))
CoefObs <- as.vector(Mod$coefficients[-1,1])
B <- 1000
TestStatBoot <- matrix(nrow=B, ncol=NSnps)
for (i in 1:B){
  SampID <- sample(1:Nobs, size=Nobs, replace=T)
  Ynew <- NDRM.CH[!MissDat][SampID]
  Xnew <- Actn3BinC[SampID,]
  CoefBoot <- summary(lm(Ynew~., data=Xnew))$coefficients[-1,1]
  SEBoot <- summary(lm(Ynew~., data=Xnew))$coefficients[-1,2]
  if (length(CoefBoot)==length(CoefObs)){
    TestStatBoot[i,] <- (CoefBoot-CoefObs)/SEBoot
  }
}
Once we have the whole TestStatBoot matrix (rows are bootstrap replications, columns are the bootstrapped $\hat{T}^*$ statistics) we find the $T_{\text{crit.}}$ for which exactly a fraction $\alpha=0.05$ of the replications contain a more significant $\hat{T}^*$ statistic (more significant meaning a bigger absolute value than $T_{\text{crit.}}$).
We report the $i$-th model component significant if $|\hat{T}_i| > T_{\text{crit.}}$
The last step can be accomplished with this code
p.value<-0.05 # The target alpha threshold
digits<-1000000
library(gtools) # for binsearch
pValueFun<-function(cj)
{
mean(apply(abs(TestStatBoot)>cj/digits,1,sum)>=1,na.rm=T)
}
ans<-binsearch(pValueFun,c(0.5*digits,100*digits),target=p.value)
p.level<-(1-pnorm(q=ans$where[[1]]/digits))*2 #two-sided.
|
Correcting p values for multiple tests where tests are correlated (genetics)
I am looking for a working solution for exactly the same problem. The best I found is the Null Unrestricted Bootstrap introduced by Foulkes Andrea in his book Applied Statistical Genetics with R(2009)
|
9,252
|
How to know whether the data is linearly separable?
|
There are several methods to find whether the data is linearly separable, some of which are highlighted in this paper (1). Assuming two classes in the dataset, the following are a few methods to find whether they are linearly separable:
Linear programming: Defines an objective function subjected to constraints that satisfy linear separability. You can find detail about implementation here.
Perceptron method: A perceptron is guaranteed to converge if the data is linearly separable.
Quadratic programming: Quadratic programming optimisation objective function can be defined with constraint as in SVM.
Computational geometry: If one can find two disjoint convex hulls then the data is linearly separable.
Clustering method: If one can find two clusters with cluster purity of 100% using some clustering methods such as k-means, then the data is linearly separable.
(1): Elizondo, D., "The linear separability problem: some testing methods," in Neural Networks, IEEE Transactions on , vol.17, no.2, pp.330-344, March 2006
doi: 10.1109/TNN.2005.860871
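The perceptron test in the list above is easy to try directly. A minimal Python sketch with invented toy data; note that while convergence proves separability, hitting the epoch limit is only evidence (not proof) of non-separability:

```python
def perceptron_separable(X, y, max_epochs=1000):
    """Return True if a perceptron converges on (X, y), i.e. the data
    is linearly separable (labels y must be +1 / -1)."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            activation = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * activation <= 0:            # misclassified (or on boundary)
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
                mistakes += 1
        if mistakes == 0:                       # a full clean pass: separated
            return True
    return False

# separable: the two classes are split by the line x0 = 2
X1 = [[1, 1], [1, 2], [3, 1], [3, 2]]
y1 = [-1, -1, 1, 1]
# not separable: XOR layout
X2 = [[0, 0], [1, 1], [0, 1], [1, 0]]
y2 = [-1, -1, 1, 1]
print(perceptron_separable(X1, y1), perceptron_separable(X2, y2))  # True False
```

The perceptron convergence theorem bounds the number of mistakes by (R/γ)² for margin γ and data radius R, so for well-separated data the loop above terminates quickly; for the XOR data it cycles forever, hence the epoch cap.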
|
How to know whether the data is linearly separable?
|
There are several methods to find whether the data is linearly separable, some of them are highlighted in this paper (1). With assumption of two classes in the dataset, following are few methods to fi
|
How to know whether the data is linearly separable?
There are several methods to find whether the data is linearly separable, some of which are highlighted in this paper (1). Assuming two classes in the dataset, the following are a few methods to find whether they are linearly separable:
Linear programming: Defines an objective function subjected to constraints that satisfy linear separability. You can find detail about implementation here.
Perceptron method: A perceptron is guaranteed to converge if the data is linearly separable.
Quadratic programming: Quadratic programming optimisation objective function can be defined with constraint as in SVM.
Computational geometry: If one can find two disjoint convex hulls then the data is linearly separable.
Clustering method: If one can find two clusters with cluster purity of 100% using some clustering methods such as k-means, then the data is linearly separable.
(1): Elizondo, D., "The linear separability problem: some testing methods," in Neural Networks, IEEE Transactions on , vol.17, no.2, pp.330-344, March 2006
doi: 10.1109/TNN.2005.860871
|
How to know whether the data is linearly separable?
There are several methods to find whether the data is linearly separable, some of them are highlighted in this paper (1). With assumption of two classes in the dataset, following are few methods to fi
|
9,253
|
How to know whether the data is linearly separable?
|
I assume you are talking about a 2-class classification problem. In this case there is a line that separates your two classes, and any classic algorithm should be able to find it when it converges.
In practice, you have to train and test on the same data. If there is such a line then you should get close to 100% accuracy or 100% AUC. If there isn't such a line then training and testing on the same data will result in at least some errors. Based on the volume of the errors it may or may not be worth trying a non-linear classifier.
|
How to know whether the data is linearly separable?
|
I assume you talk about a 2-class classification problem. In this case there's a line that separates your two classes and any classic algorithm should be able to find it when it converges.
In practic
|
How to know whether the data is linearly separable?
I assume you are talking about a 2-class classification problem. In this case there is a line that separates your two classes, and any classic algorithm should be able to find it when it converges.
In practice, you have to train and test on the same data. If there is such a line then you should get close to 100% accuracy or 100% AUC. If there isn't such a line then training and testing on the same data will result in at least some errors. Based on the volume of the errors it may or may not be worth trying a non-linear classifier.
|
How to know whether the data is linearly separable?
I assume you talk about a 2-class classification problem. In this case there's a line that separates your two classes and any classic algorithm should be able to find it when it converges.
In practic
|
9,254
|
How to know whether the data is linearly separable?
|
Consider the hard margin SVM formulation, which tries to find a hyperplane that strictly separates the data.
$$ \min_{w,b} \space \|w\|^2 $$
$$ \text{s.t.} \space \forall i, (w'x_{i} + b)y_{i} \ge 1 $$
If our data is linearly separable, all the inequality constraints will be satisfied. Notice that $w'x + b$ simply indicates which side of a plane a point lies on. Knowing the feasibility of the SVM problem is equivalent to knowing if our data is linearly separable. However, we don't actually care much about the objective for simply checking linear separability. Can we solve a simpler feasibility problem, maybe a linear program?
The following LP can be solved to check the feasibility.
$$ \min_{s,b,w} \space s$$
$$ \text{s.t.} \space \forall i, (w'x_{i} + b)y_{i} \ge 1 - s $$
$$ s \ge 0$$
If the optimal $s$ for this problem is zero, we know that the original inequality constraints can be satisfied. This means our data was linearly separable in the original space. Using a separate $s_i$ for each training example can tell us which data points cause linear inseparability.
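This LP can be handed directly to an off-the-shelf solver. A sketch using `scipy.optimize.linprog` (assuming labels $y_i \in \{-1,+1\}$ and made-up data):

```python
import numpy as np
from scipy.optimize import linprog

def is_linearly_separable(X, y):
    """Solve the feasibility LP: min s  s.t.  y_i(w'x_i + b) >= 1 - s,  s >= 0.
    The data are separable iff the optimal s is (numerically) zero."""
    n, d = X.shape
    c = np.zeros(d + 2)                     # decision vector z = [w, b, s]
    c[-1] = 1.0                             # minimize s
    # Constraints rewritten as A_ub @ z <= b_ub:  -y_i x_i'w - y_i b - s <= -1
    A_ub = np.hstack([-y[:, None] * X, -y[:, None], -np.ones((n, 1))])
    b_ub = -np.ones(n)
    bounds = [(None, None)] * (d + 1) + [(0, None)]   # w, b free; s >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun < 1e-7

X = np.array([[2., 1.], [3., 2.], [-2., -1.], [-3., -2.]])
y = np.array([1., 1., -1., -1.])
print(is_linearly_separable(X, y))   # True
```

The LP is always feasible (take $w=0$, $b=0$, $s=1$), so the solver cannot fail; only the size of the optimal $s$ matters.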
|
How to know whether the data is linearly separable?
|
Consider the hard margin SVM formulation, which tries to find a hyperplane that strictly separates the data.
$$ min_{w,b} \space ||w||^2 $$
$$ s.t \space \forall i, (w'x_{i} + b)y_{i} \ge 1
|
How to know whether the data is linearly separable?
Consider the hard margin SVM formulation, which tries to find a hyperplane that strictly separates the data.
$$ \min_{w,b} \space \|w\|^2 $$
$$ \text{s.t.} \space \forall i, (w'x_{i} + b)y_{i} \ge 1 $$
If our data is linearly separable, all the inequality constraints will be satisfied. Notice that $w'x + b$ simply indicates which side of a plane a point lies on. Knowing the feasibility of the SVM problem is equivalent to knowing if our data is linearly separable. However, we don't actually care much about the objective for simply checking linear separability. Can we solve a simpler feasibility problem, maybe a linear program?
The following LP can be solved to check the feasibility.
$$ \min_{s,b,w} \space s$$
$$ \text{s.t.} \space \forall i, (w'x_{i} + b)y_{i} \ge 1 - s $$
$$ s \ge 0$$
If the optimal $s$ for this problem is zero, we know that the original inequality constraints can be satisfied. This means our data was linearly separable in the original space. Using a separate $s_i$ for each training example can tell us which data points cause linear inseparability.
|
How to know whether the data is linearly separable?
Consider the hard margin SVM formulation, which tries to find a hyperplane that strictly separates the data.
$$ min_{w,b} \space ||w||^2 $$
$$ s.t \space \forall i, (w'x_{i} + b)y_{i} \ge 1
|
9,255
|
Why maximum likelihood and not expected likelihood?
|
The method proposed (after normalizing the likelihood to be a density) is equivalent to estimating the parameters using a flat prior for all the parameters in the model and using the mean of the posterior distribution as your estimator. There are cases where using a flat prior can get you into trouble because you don't end up with a proper posterior distribution so I don't know how you would rectify that situation here.
Staying in a frequentist context, though, the method doesn't make much sense: the likelihood doesn't constitute a probability density in most contexts, and there is nothing random left, so taking an expectation is hard to justify. Now we can just formalize this as an operation we apply to the likelihood after the fact to obtain an estimate, but I'm not sure what the frequentist properties of this estimator would look like (in the cases where the estimate actually exists).
Advantages:
This can provide an estimate in some cases where the MLE doesn't actually exist.
If you're not stubborn it can move you into a Bayesian setting (and that would probably be the natural way to do inference with this type of estimate). Ok so depending on your views this may not be an advantage - but it is to me.
Disadvantages:
This isn't guaranteed to exist either.
If we don't have a convex parameter space the estimate may not be a valid value for the parameter.
The process isn't invariant to reparameterization. Since the process is equivalent to putting a flat prior on your parameters, it makes a difference what those parameters are (are we talking about using $\sigma$ as the parameter, or are we using $\sigma^2$?).
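A tiny numerical sketch of the flat-prior equivalence (made-up counts): for a binomial likelihood with $k$ successes in $n$ trials, the MLE is $k/n$, while normalizing the likelihood over $[0,1]$ and taking its mean gives the flat-prior posterior mean $(k+1)/(n+2)$ — a different, shrunken estimate.

```python
import numpy as np
from scipy.integrate import quad

# k successes in n trials (made-up numbers)
k, n = 7, 10
lik = lambda p: p**k * (1 - p)**(n - k)
Z = quad(lik, 0, 1)[0]                       # normalizing constant
post_mean = quad(lambda p: p * lik(p), 0, 1)[0] / Z
print(k / n)                                 # 0.7      (MLE)
print(round(post_mean, 4))                   # 0.6667   ((k+1)/(n+2) = 8/12)
```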
|
Why maximum likelihood and not expected likelihood?
|
The method proposed (after normalizing the likelihood to be a density) is equivalent to estimating the parameters using a flat prior for all the parameters in the model and using the mean of the poste
|
Why maximum likelihood and not expected likelihood?
The method proposed (after normalizing the likelihood to be a density) is equivalent to estimating the parameters using a flat prior for all the parameters in the model and using the mean of the posterior distribution as your estimator. There are cases where using a flat prior can get you into trouble because you don't end up with a proper posterior distribution so I don't know how you would rectify that situation here.
Staying in a frequentist context, though, the method doesn't make much sense: the likelihood doesn't constitute a probability density in most contexts, and there is nothing random left, so taking an expectation is hard to justify. Now we can just formalize this as an operation we apply to the likelihood after the fact to obtain an estimate, but I'm not sure what the frequentist properties of this estimator would look like (in the cases where the estimate actually exists).
Advantages:
This can provide an estimate in some cases where the MLE doesn't actually exist.
If you're not stubborn it can move you into a Bayesian setting (and that would probably be the natural way to do inference with this type of estimate). Ok so depending on your views this may not be an advantage - but it is to me.
Disadvantages:
This isn't guaranteed to exist either.
If we don't have a convex parameter space the estimate may not be a valid value for the parameter.
The process isn't invariant to reparameterization. Since the process is equivalent to putting a flat prior on your parameters, it makes a difference what those parameters are (are we talking about using $\sigma$ as the parameter, or are we using $\sigma^2$?).
|
Why maximum likelihood and not expected likelihood?
The method proposed (after normalizing the likelihood to be a density) is equivalent to estimating the parameters using a flat prior for all the parameters in the model and using the mean of the poste
|
9,256
|
Why maximum likelihood and not expected likelihood?
|
One reason is that maximum likelihood estimation is easier: you set the derivative of the likelihood w.r.t. the parameters to zero and solve for the parameters. Taking an expectation means integrating the likelihood times each parameter.
Another reason is that with exponential families, maximum likelihood estimation corresponds to taking an expectation. For example, the maximum likelihood normal distribution fitting data points $\{x_i\}$ has mean $\mu=E(x)$ and second moment $\chi=E(x^2)$.
In some cases, the maximum likelihood parameter is the same as the expected likelihood parameter. For example, the expected likelihood mean of the normal distribution above is the same as the maximum likelihood because the prior on the mean is normal, and the mode and mean of a normal distribution coincide. Of course that won't be true for the other parameter (however you parametrize it).
I think the most important reason is probably: why do you want an expectation of the parameters? Usually, you are learning a model and the parameter values are all you want. If you're going to return a single value, isn't the maximum likelihood the best you can return?
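A toy check of the coincidence for the normal mean (made-up data, known $\sigma=1$): the likelihood in $\mu$ is itself a normal-shaped curve centered at the sample mean, so its normalized mean equals the MLE.

```python
import numpy as np
from scipy.integrate import quad

x = np.array([1.2, 0.7, 1.9, 1.1])           # made-up sample, known sigma = 1
lik = lambda mu: np.prod(np.exp(-(x - mu)**2 / 2))
Z = quad(lik, -10, 10)[0]
norm_lik_mean = quad(lambda mu: mu * lik(mu), -10, 10)[0] / Z
print(np.isclose(norm_lik_mean, x.mean()))   # True: coincides with the MLE
```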
|
Why maximum likelihood and not expected likelihood?
|
One reason is that maximum likelihood estimation is easier: you set the derivative of the likelihood w.r.t. the parameters to zero and solve for the parameters. Taking an expectation means integratin
|
Why maximum likelihood and not expected likelihood?
One reason is that maximum likelihood estimation is easier: you set the derivative of the likelihood w.r.t. the parameters to zero and solve for the parameters. Taking an expectation means integrating the likelihood times each parameter.
Another reason is that with exponential families, maximum likelihood estimation corresponds to taking an expectation. For example, the maximum likelihood normal distribution fitting data points $\{x_i\}$ has mean $\mu=E(x)$ and second moment $\chi=E(x^2)$.
In some cases, the maximum likelihood parameter is the same as the expected likelihood parameter. For example, the expected likelihood mean of the normal distribution above is the same as the maximum likelihood because the prior on the mean is normal, and the mode and mean of a normal distribution coincide. Of course that won't be true for the other parameter (however you parametrize it).
I think the most important reason is probably: why do you want an expectation of the parameters? Usually, you are learning a model and the parameter values are all you want. If you're going to return a single value, isn't the maximum likelihood the best you can return?
|
Why maximum likelihood and not expected likelihood?
One reason is that maximum likelihood estimation is easier: you set the derivative of the likelihood w.r.t. the parameters to zero and solve for the parameters. Taking an expectation means integratin
|
9,257
|
Why maximum likelihood and not expected likelihood?
|
There is an interesting paper proposing to maximize not the observed likelihood, but the expected likelihood: Expected Maximum Log Likelihood Estimation. In many examples this gives the same results as MLE, but in some examples where it is different, it is arguably better, or at least different in an interesting way.
Note that this is a pure frequentist idea, so is different from what is discussed in the other answers, where it is assumed that expectation is of the parameter itself, so some (quasi-)bayesian idea.
One example: Take the usual multiple linear regression model, with normal errors. Then the log-likelihood function is (up to a constant):
$$ \log L(\beta,\sigma^2) = -\frac{n}{2}\log \sigma^2 - \frac1{2\sigma^2} (Y-X\beta)^T (Y-X\beta)
$$ Its derivative with respect to $\sigma^2$ can be written (with $\hat{\beta}=(X^TX)^{-1} X^T Y$, the usual least-squares estimator of $\beta$)
$$
\left[ -\frac{n}{2\sigma^2}+\frac1{2\sigma^4}(Y-X\hat{\beta})^T(Y-X\hat{\beta})\right]+\frac1{2\sigma^4}(\hat{\beta}-\beta)^T X^T X(\hat{\beta}-\beta)
$$
The second term here is $\frac12 \left(\frac{\partial \log L}{\partial \beta}\right)^T (X^T X)^{-1} \frac{\partial \log L}{\partial \beta}$ with expectation $\frac{p}{2\sigma^2}$, so replacing it by its expectation, the estimating equation for $\sigma^2$ becomes
$$
-\frac{n}{2\sigma^2}+\frac1{2\sigma^4}(Y-X\hat{\beta})^T (Y-X\hat{\beta}) + \frac{p}{2\sigma^2} = 0
$$ where $p$ is the number of columns in $X$. The solution is the usual bias-corrected estimator, with denominator $n-p$, and not $n$, as for the MLE.
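A quick simulation sketch (made-up design matrix and coefficients) confirming the algebra: solving the estimating equation gives $\hat\sigma^2 = \mathrm{RSS}/(n-p)$, while the MLE divides by $n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))                  # made-up design matrix
Y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
rss = np.sum((Y - X @ beta_hat)**2)

sigma2_mle = rss / n                          # maximum likelihood estimate
sigma2_eml = rss / (n - p)                    # solves the estimating equation above
print(sigma2_mle < sigma2_eml)                # True: the MLE is biased downward
```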
|
Why maximum likelihood and not expected likelihood?
|
There is an interesting paper proposing to maximize not the observed likelihood, but the expected likelihood Expected Maximum Log Likelihood Estimation. In many examples this gives the same results as
|
Why maximum likelihood and not expected likelihood?
There is an interesting paper proposing to maximize not the observed likelihood, but the expected likelihood: Expected Maximum Log Likelihood Estimation. In many examples this gives the same results as MLE, but in some examples where it is different, it is arguably better, or at least different in an interesting way.
Note that this is a pure frequentist idea, so is different from what is discussed in the other answers, where it is assumed that expectation is of the parameter itself, so some (quasi-)bayesian idea.
One example: Take the usual multiple linear regression model, with normal errors. Then the log-likelihood function is (up to a constant):
$$ \log L(\beta,\sigma^2) = -\frac{n}{2}\log \sigma^2 - \frac1{2\sigma^2} (Y-X\beta)^T (Y-X\beta)
$$ Its derivative with respect to $\sigma^2$ can be written (with $\hat{\beta}=(X^TX)^{-1} X^T Y$, the usual least-squares estimator of $\beta$)
$$
\left[ -\frac{n}{2\sigma^2}+\frac1{2\sigma^4}(Y-X\hat{\beta})^T(Y-X\hat{\beta})\right]+\frac1{2\sigma^4}(\hat{\beta}-\beta)^T X^T X(\hat{\beta}-\beta)
$$
The second term here is $\frac12 \left(\frac{\partial \log L}{\partial \beta}\right)^T (X^T X)^{-1} \frac{\partial \log L}{\partial \beta}$ with expectation $\frac{p}{2\sigma^2}$, so replacing it by its expectation, the estimating equation for $\sigma^2$ becomes
$$
-\frac{n}{2\sigma^2}+\frac1{2\sigma^4}(Y-X\hat{\beta})^T (Y-X\hat{\beta}) + \frac{p}{2\sigma^2} = 0
$$ where $p$ is the number of columns in $X$. The solution is the usual bias-corrected estimator, with denominator $n-p$, and not $n$, as for the MLE.
|
Why maximum likelihood and not expected likelihood?
There is an interesting paper proposing to maximize not the observed likelihood, but the expected likelihood Expected Maximum Log Likelihood Estimation. In many examples this gives the same results as
|
9,258
|
Why maximum likelihood and not expected likelihood?
|
This approach exists and it is called the Minimum Contrast Estimator. An example of a related paper (see also the references therein):
https://arxiv.org/abs/0901.0655
|
Why maximum likelihood and not expected likelihood?
|
This approach exists and it is called the Minimum Contrast Estimator. An example of a related paper (see also the references therein):
https://arxiv.org/abs/0901.0655
|
Why maximum likelihood and not expected likelihood?
This approach exists and it is called the Minimum Contrast Estimator. An example of a related paper (see also the references therein):
https://arxiv.org/abs/0901.0655
|
Why maximum likelihood and not expected likelihood?
This approach exists and it is called the Minimum Contrast Estimator. An example of a related paper (see also the references therein):
https://arxiv.org/abs/0901.0655
|
9,259
|
Inter-rater reliability for ordinal or interval data
|
The Kappa ($\kappa$) statistic is a quality index that compares observed agreement between 2 raters on a nominal or ordinal scale with agreement expected by chance alone (as if raters were tossing up). Extensions for the case of multiple raters exist (2, pp. 284–291). In the case of ordinal data, you can use the weighted $\kappa$, which basically reads as the usual $\kappa$ with off-diagonal elements contributing to the measure of agreement. Fleiss (3) provided guidelines to interpret $\kappa$ values, but these are merely rules of thumb.
The $\kappa$ statistic is asymptotically equivalent to the ICC estimated from a two-way random effects ANOVA, but significance tests and SE coming from the usual ANOVA framework are not valid anymore with binary data. It is better to use bootstrap to get confidence interval (CI). Fleiss (8) discussed the connection between weighted kappa and the intraclass correlation (ICC).
It should be noted that some psychometricians don't very much like $\kappa$ because it is affected by the prevalence of the object of measurement much like predictive values are affected by the prevalence of the disease under consideration, and this can lead to paradoxical results.
Inter-rater reliability for $k$ raters can be estimated with Kendall’s coefficient of concordance, $W$. When the number of items or units that are rated $n > 7$, $k(n − 1)W \sim \chi^2(n − 1)$. (2, pp. 269–270). This asymptotic approximation is valid for moderate value of $n$ and $k$ (6), but with less than 20 items $F$ or permutation tests are more suitable (7). There is a close relationship between Spearman’s $\rho$ and Kendall’s $W$ statistic: $W$ can be directly calculated from the mean of the pairwise Spearman correlations (for untied observations only).
Polychoric (ordinal data) correlation may also be used as a measure of inter-rater agreement. Indeed, it allows one to
estimate what would be the correlation if ratings were made on a continuous scale,
test marginal homogeneity between raters.
In fact, it can be shown that it is a special case of latent trait modeling, which allows one to relax distributional assumptions (4).
About continuous (or so assumed) measurements, the ICC, which quantifies the proportion of variance attributable to the between-subject variation, is fine. Again, bootstrapped CIs are recommended. As @ars said, there are basically two versions -- agreement and consistency -- that are applicable in the case of agreement studies (5), and that mainly differ in the way sums of squares are computed; the “consistency” ICC is generally estimated without considering the Item×Rater interaction. The ANOVA framework is useful with specific block designs where one wants to minimize the number of ratings (BIBD) -- in fact, this was one of the original motivations of Fleiss's work. It is also the best way to go for multiple raters. The natural extension of this approach is called Generalizability Theory. A brief overview is given in Rater Models: An Introduction, otherwise the standard reference is Brennan's book, reviewed in Psychometrika 2006 71(3).
As for general references, I recommend chapter 3 of Statistics in Psychiatry, from Graham Dunn (Hodder Arnold, 2000). For a more complete treatment of reliability studies, the best reference to date is
Dunn, G (2004). Design and Analysis of
Reliability Studies. Arnold. See the
review in the International Journal
of Epidemiology.
A good online introduction is available on John Uebersax's website, Intraclass Correlation and Related Methods; it includes a discussion of the pros and cons of the ICC approach, especially with respect to ordinal scales.
Relevant R packages for two-way assessment (ordinal or continuous measurements) are found in the Psychometrics Task View; I generally use either the psy, psych, or irr packages. There's also the concord package, but I never used it. For dealing with more than two raters, the lme4 package is the way to go as it allows one to easily incorporate random effects, but most of the reliability designs can be analysed using aov() because we only need to estimate variance components.
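As a self-contained sketch (pure NumPy with made-up ratings, rather than the R packages above), the weighted $\kappa$ computation — observed vs. chance-expected disagreement, with off-diagonal cells down-weighted by distance — looks like:

```python
import numpy as np

def weighted_kappa(r1, r2, k, weights="quadratic"):
    """Cohen's weighted kappa for two raters on k ordinal categories 0..k-1.
    Off-diagonal cells contribute partial (dis)agreement via distance weights."""
    O = np.zeros((k, k))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()                                   # observed proportions
    E = np.outer(O.sum(axis=1), O.sum(axis=0))     # expected under independence
    i, j = np.indices((k, k))
    d = np.abs(i - j) / (k - 1)
    W = d**2 if weights == "quadratic" else d      # disagreement weights
    return 1 - (W * O).sum() / (W * E).sum()

r1 = [0, 1, 2, 2, 1, 0, 1, 2]                      # made-up ordinal ratings
r2 = [0, 1, 2, 1, 1, 0, 0, 2]
print(round(weighted_kappa(r1, r2, 3), 3))         # 0.805
```

With quadratic weights this statistic is the one whose equivalence to the ICC Fleiss (8) discusses.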
References
J Cohen. Weighted kappa: Nominal scale agreement with provision for scales disagreement of partial credit. Psychological Bulletin, 70, 213–220, 1968.
S Siegel and Jr N John Castellan. Nonparametric Statistics for the Behavioral
Sciences. McGraw-Hill, Second edition, 1988.
J L Fleiss. Statistical Methods for Rates and Proportions. New York: Wiley, Second
edition, 1981.
J S Uebersax. The tetrachoric and polychoric correlation coefficients. Statistical Methods for Rater Agreement web site, 2006. Available at: http://john-uebersax.com/stat/tetra.htm. Accessed February 24, 2010.
P E Shrout and J L Fleiss. Intraclass correlation: Uses in assessing rater reliability. Psychological Bulletin, 86, 420–428, 1979.
M G Kendall and B Babington Smith. The problem of m rankings. Annals of Mathematical Statistics, 10, 275–287, 1939.
P Legendre. Coefficient of concordance. In N J Salkind, editor, Encyclopedia of Research Design. SAGE Publications, 2010.
J L Fleiss. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and Psychological Measurement, 33, 613-619, 1973.
|
Inter-rater reliability for ordinal or interval data
|
The Kappa ($\kappa$) statistic is a quality index that compares observed agreement between 2 raters on a nominal or ordinal scale with agreement expected by chance alone (as if raters were tossing up)
|
Inter-rater reliability for ordinal or interval data
The Kappa ($\kappa$) statistic is a quality index that compares observed agreement between 2 raters on a nominal or ordinal scale with agreement expected by chance alone (as if raters were tossing up). Extensions for the case of multiple raters exist (2, pp. 284–291). In the case of ordinal data, you can use the weighted $\kappa$, which basically reads as the usual $\kappa$ with off-diagonal elements contributing to the measure of agreement. Fleiss (3) provided guidelines to interpret $\kappa$ values, but these are merely rules of thumb.
The $\kappa$ statistic is asymptotically equivalent to the ICC estimated from a two-way random effects ANOVA, but significance tests and SE coming from the usual ANOVA framework are not valid anymore with binary data. It is better to use bootstrap to get confidence interval (CI). Fleiss (8) discussed the connection between weighted kappa and the intraclass correlation (ICC).
It should be noted that some psychometricians don't very much like $\kappa$ because it is affected by the prevalence of the object of measurement much like predictive values are affected by the prevalence of the disease under consideration, and this can lead to paradoxical results.
Inter-rater reliability for $k$ raters can be estimated with Kendall’s coefficient of concordance, $W$. When the number of items or units that are rated $n > 7$, $k(n − 1)W \sim \chi^2(n − 1)$. (2, pp. 269–270). This asymptotic approximation is valid for moderate value of $n$ and $k$ (6), but with less than 20 items $F$ or permutation tests are more suitable (7). There is a close relationship between Spearman’s $\rho$ and Kendall’s $W$ statistic: $W$ can be directly calculated from the mean of the pairwise Spearman correlations (for untied observations only).
Polychoric (ordinal data) correlation may also be used as a measure of inter-rater agreement. Indeed, it allows one to
estimate what would be the correlation if ratings were made on a continuous scale,
test marginal homogeneity between raters.
In fact, it can be shown that it is a special case of latent trait modeling, which allows one to relax distributional assumptions (4).
About continuous (or so assumed) measurements, the ICC, which quantifies the proportion of variance attributable to the between-subject variation, is fine. Again, bootstrapped CIs are recommended. As @ars said, there are basically two versions -- agreement and consistency -- that are applicable in the case of agreement studies (5), and that mainly differ in the way sums of squares are computed; the “consistency” ICC is generally estimated without considering the Item×Rater interaction. The ANOVA framework is useful with specific block designs where one wants to minimize the number of ratings (BIBD) -- in fact, this was one of the original motivations of Fleiss's work. It is also the best way to go for multiple raters. The natural extension of this approach is called Generalizability Theory. A brief overview is given in Rater Models: An Introduction, otherwise the standard reference is Brennan's book, reviewed in Psychometrika 2006 71(3).
As for general references, I recommend chapter 3 of Statistics in Psychiatry, from Graham Dunn (Hodder Arnold, 2000). For a more complete treatment of reliability studies, the best reference to date is
Dunn, G (2004). Design and Analysis of
Reliability Studies. Arnold. See the
review in the International Journal
of Epidemiology.
A good online introduction is available on John Uebersax's website, Intraclass Correlation and Related Methods; it includes a discussion of the pros and cons of the ICC approach, especially with respect to ordinal scales.
Relevant R packages for two-way assessment (ordinal or continuous measurements) are found in the Psychometrics Task View; I generally use either the psy, psych, or irr packages. There's also the concord package, but I never used it. For dealing with more than two raters, the lme4 package is the way to go as it allows one to easily incorporate random effects, but most of the reliability designs can be analysed using aov() because we only need to estimate variance components.
References
J Cohen. Weighted kappa: Nominal scale agreement with provision for scales disagreement of partial credit. Psychological Bulletin, 70, 213–220, 1968.
S Siegel and Jr N John Castellan. Nonparametric Statistics for the Behavioral
Sciences. McGraw-Hill, Second edition, 1988.
J L Fleiss. Statistical Methods for Rates and Proportions. New York: Wiley, Second
edition, 1981.
J S Uebersax. The tetrachoric and polychoric correlation coefficients. Statistical Methods for Rater Agreement web site, 2006. Available at: http://john-uebersax.com/stat/tetra.htm. Accessed February 24, 2010.
P E Shrout and J L Fleiss. Intraclass correlation: Uses in assessing rater reliability. Psychological Bulletin, 86, 420–428, 1979.
M G Kendall and B Babington Smith. The problem of m rankings. Annals of Mathematical Statistics, 10, 275–287, 1939.
P Legendre. Coefficient of concordance. In N J Salkind, editor, Encyclopedia of Research Design. SAGE Publications, 2010.
J L Fleiss. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and Psychological Measurement, 33, 613-619, 1973.
|
Inter-rater reliability for ordinal or interval data
The Kappa ($\kappa$) statistic is a quality index that compares observed agreement between 2 raters on a nominal or ordinal scale with agreement expected by chance alone (as if raters were tossing up)
|
9,260
|
Inter-rater reliability for ordinal or interval data
|
The Intraclass correlation may be used for ordinal data. But there are some caveats, primarily that the raters cannot be distinguished. For more on this and how to choose among different versions of the ICC, see:
Intraclass correlations: uses in assessing rater reliability (Shrout, Fleiss, 1979)
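As an illustration (a Python sketch; the data are the classic 6-subject × 4-rater example from the Shrout & Fleiss paper), ICC(2,1) — two-way random effects, absolute agreement, single rater — can be computed from the two-way ANOVA mean squares:

```python
import numpy as np

def icc2_1(ratings):
    """Shrout & Fleiss ICC(2,1): two-way random effects, absolute agreement,
    single rater; `ratings` is an (n subjects x k raters) array."""
    n, k = ratings.shape
    grand = ratings.mean()
    msr = k * np.sum((ratings.mean(axis=1) - grand)**2) / (n - 1)  # subjects
    msc = n * np.sum((ratings.mean(axis=0) - grand)**2) / (k - 1)  # raters
    resid = (ratings - ratings.mean(axis=1, keepdims=True)
             - ratings.mean(axis=0, keepdims=True) + grand)
    mse = np.sum(resid**2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# The 6-subject, 4-rater example from Shrout & Fleiss (1979)
r = np.array([[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
              [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]], dtype=float)
print(round(icc2_1(r), 2))   # 0.29
```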
|
Inter-rater reliability for ordinal or interval data
|
The Intraclass correlation may be used for ordinal data. But there are some caveats, primarily that the raters cannot be distinguished. For more on this and how to choose among different versions of
|
Inter-rater reliability for ordinal or interval data
The Intraclass correlation may be used for ordinal data. But there are some caveats, primarily that the raters cannot be distinguished. For more on this and how to choose among different versions of the ICC, see:
Intraclass correlations: uses in assessing rater reliability (Shrout, Fleiss, 1979)
|
Inter-rater reliability for ordinal or interval data
The Intraclass correlation may be used for ordinal data. But there are some caveats, primarily that the raters cannot be distinguished. For more on this and how to choose among different versions of
|
9,261
|
Multi-layer perceptron vs deep neural network
|
One can consider the multi-layer perceptron (MLP) to be a subset of deep neural networks (DNN), but the two terms are often used interchangeably in the literature.
The assumption that perceptrons are named based on their learning rule is incorrect. The classical "perceptron update rule" is one of the ways that can be used to train it. The early rejection of neural networks was because of this very reason, as the perceptron update rule gives no way to assign credit to hidden units, making it impossible to train networks with more than one layer.
The use of back-propagation in training networks led to using alternate squashing activation functions such as tanh and sigmoid.
So, to answer the questions,
the question is. Is a "multi-layer perceptron" the same thing as a "deep neural network"?
MLPs are a subset of DNNs. DNNs can have loops, while MLPs are always feed-forward, i.e.,
A multi-layer perceptron (MLP) is a finite acyclic graph
why is this terminology used?
A lot of the terminology used in the scientific literature has to do with the trends of the time and has caught on.
How broad is this terminology? Would one use the term "multi-layered perceptron" when referring to, for example, Inception net? How about for a recurrent network using LSTM modules used in NLP?
So, yes, Inception, convolutional networks, ResNet, etc. are all MLPs because there is no cycle between connections. Even if there are shortcut connections skipping layers, as long as they are in the forward direction, the network can be called a multilayer perceptron. But LSTMs, vanilla RNNs, etc. have cyclic connections, hence cannot be called MLPs, though they are a subset of DNNs.
This is my understanding of things. Please correct me if I am wrong.
Reference Links:
https://cs.stackexchange.com/questions/53521/what-is-difference-between-multilayer-perceptron-and-multilayer-neural-network
https://en.wikipedia.org/wiki/Multilayer_perceptron
https://en.wikipedia.org/wiki/Perceptron
http://ml.informatik.uni-freiburg.de/former/_media/teaching/ss10/05_mlps.printer.pdf
|
Multi-layer perceptron vs deep neural network
|
One can consider multi-layer perceptron (MLP) to be a subset of deep neural networks (DNN), but are often used interchangeably in literature.
The assumption that perceptrons are named based on their l
|
Multi-layer perceptron vs deep neural network
One can consider the multi-layer perceptron (MLP) to be a subset of deep neural networks (DNN), but the two terms are often used interchangeably in the literature.
The assumption that perceptrons are named based on their learning rule is incorrect. The classical "perceptron update rule" is one of the ways that can be used to train it. The early rejection of neural networks was because of this very reason, as the perceptron update rule gives no way to assign credit to hidden units, making it impossible to train networks with more than one layer.
The use of back-propagation in training networks led to using alternate squashing activation functions such as tanh and sigmoid.
So, to answer the questions,
the question is. Is a "multi-layer perceptron" the same thing as a "deep neural network"?
MLPs are a subset of DNNs. DNNs can have loops, while MLPs are always feed-forward, i.e.,
A multi-layer perceptron (MLP) is a finite acyclic graph
why is this terminology used?
A lot of the terminology used in the scientific literature has to do with the trends of the time and has caught on.
How broad is this terminology? Would one use the term "multi-layered perceptron" when referring to, for example, Inception net? How about for a recurrent network using LSTM modules used in NLP?
So, yes, Inception, convolutional networks, ResNet, etc. are all MLPs because there is no cycle between connections. Even if there are shortcut connections skipping layers, as long as they are in the forward direction, the network can be called a multilayer perceptron. But LSTMs, vanilla RNNs, etc. have cyclic connections, hence cannot be called MLPs, though they are a subset of DNNs.
This is my understanding of things. Please correct me if I am wrong.
Reference Links:
https://cs.stackexchange.com/questions/53521/what-is-difference-between-multilayer-perceptron-and-multilayer-neural-network
https://en.wikipedia.org/wiki/Multilayer_perceptron
https://en.wikipedia.org/wiki/Perceptron
http://ml.informatik.uni-freiburg.de/former/_media/teaching/ss10/05_mlps.printer.pdf
|
Multi-layer perceptron vs deep neural network
One can consider multi-layer perceptron (MLP) to be a subset of deep neural networks (DNN), but are often used interchangeably in literature.
The assumption that perceptrons are named based on their l
|
9,262
|
Multi-layer perceptron vs deep neural network
|
Good question: note that in the field of Deep Learning things are not always as clear-cut and clearly defined as in Statistical Learning (also because there's a lot of hype), so don't expect to find definitions as rigorous as in Mathematics. Anyway, the multilayer perceptron is a specific feed-forward neural network architecture, where you stack up multiple fully-connected layers (so, no convolution layers at all), and where the activation functions of the hidden units are often a sigmoid or a tanh. The nodes of the output layer usually have softmax activation functions (for classification) or linear activation functions (for regression). The typical MLP architectures are not "deep", i.e., we don't have many hidden layers. You usually have, say, 1 to 5 hidden layers. These neural networks were common in the '80s, and are trained by backpropagation.
Now, with Deep Neural Network we mean a network which has many layers (19, 22, 152,...even > 1200, though that admittedly is very extreme). Note that
we haven't specified the architecture of the network, so this could be feed-forward, recurrent, etc.
we haven't specified the nature of the connections, so we could have fully connected layers, convolutional layers, recurrence, etc.
"many" layers admittedly is not a rigorous definition.
So, why does it still make sense to speak of DNNs (apart from hype reasons)? Because when you start stacking more and more layers, you actually need to use new techniques (new activation functions, new kinds of layers, new optimization strategies...even new hardware) to be able to 1) train your model and 2) make it generalize on new cases. For example, suppose you take a classical MLP for 10-class classification, tanh activation functions, input & hidden layers with 32 units each and output layer with 10 softmax units $\Rightarrow 32\times32+32\times10 = 1344$ weights. You add 10 layers $\Rightarrow 11584$ weights. This is a minuscule NN by today's standards. However, when you go on to train it on a suitably large data set, you find that the convergence rate has slowed down tremendously. This is not only due to the larger number of weights, but to the vanishing gradient problem - back-propagation computes the gradient of the loss function by multiplying errors across the layers, and these small numbers become exponentially smaller the more layers you add. Thus, the errors don't propagate (or propagate very slowly) down your network, and it looks like the error on the training set stops decreasing with training epochs.
And this was a small network - the deep Convolutional Neural Network called AlexNet had 5 convolutional layers (plus 3 fully connected) but 60 million weights, and it's considered small by today's standards! When you have so many weights, then any data set is "small" - even ImageNet, a data set of images used for classification, has "only" about 1 million images, thus the risk of overfitting is much larger than for shallow networks.
Deep Learning can thus be understood as the set of tools which are used in practice to train neural networks with a large number of layers and weights, achieving low generalization error. This task poses more challenges than for smaller networks. You can definitely build a Deep Multilayer Perceptron and train it - but (apart from the fact that it's not the optimal architecture for many tasks where Deep Learning is used today) you will probably use tools which are different from those used when networks used to be "shallow". For example, you may prefer ReLU activation units to sigmoid or tanh, because they soften the vanishing gradient problem.
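The weight counts in the example above, and the layer-by-layer shrinkage behind the vanishing gradient, can be checked with a short NumPy sketch (random toy weights; the exact decay rate depends on the weight scale chosen here):

```python
import numpy as np

# Weight counts from the 10-class MLP example: 32-unit input/hidden
# layers, 10-unit softmax output, then 10 extra 32-unit layers.
w_shallow = 32 * 32 + 32 * 10       # 1344
w_deep = w_shallow + 10 * 32 * 32   # 11584
print(w_shallow, w_deep)

# Vanishing gradient: each tanh layer multiplies the back-propagated
# error by tanh'(z) = 1 - tanh(z)^2 <= 1, so the product decays with depth.
rng = np.random.default_rng(0)
x = rng.normal(size=32)
grad = 1.0
for _ in range(12):                 # 12 stacked tanh layers
    x = np.tanh(rng.normal(scale=0.5, size=(32, 32)) @ x)
    grad *= (1 - x ** 2).mean()     # mean tanh derivative at this layer
print(grad)                         # far below 1 after a dozen layers
```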
|
Multi-layer perceptron vs deep neural network
|
Good question: note that in the field of Deep Learning things are not always as well-cut and clearly defined as in Statistical Learning (also because there's a lot of hype), so don't expect to find de
|
Multi-layer perceptron vs deep neural network
Good question: note that in the field of Deep Learning things are not always as clear-cut and clearly defined as in Statistical Learning (also because there's a lot of hype), so don't expect to find definitions as rigorous as in Mathematics. Anyway, the multilayer perceptron is a specific feed-forward neural network architecture, where you stack up multiple fully-connected layers (so, no convolution layers at all), where the activation functions of the hidden units are often a sigmoid or a tanh. The nodes of the output layer usually have softmax activation functions (for classification) or linear activation functions (for regression). The typical MLP architectures are not "deep", i.e., we don't have many hidden layers. You usually have, say, 1 to 5 hidden layers. These neural networks were common in the '80s, and are trained by backpropagation.
Now, with Deep Neural Network we mean a network which has many layers (19, 22, 152,...even > 1200, though that admittedly is very extreme). Note that
we haven't specified the architecture of the network, so this could be feed-forward, recurrent, etc.
we haven't specified the nature of the connections, so we could have fully connected layers, convolutional layers, recurrence, etc.
"many" layers admittedly is not a rigorous definition.
So, why does it still make sense to speak of DNNs (apart from hype reasons)? Because when you start stacking more and more layers, you actually need to use new techniques (new activation functions, new kinds of layers, new optimization strategies...even new hardware) to be able to 1) train your model and 2) make it generalize on new cases. For example, suppose you take a classical MLP for 10-class classification, tanh activation functions, input & hidden layers with 32 units each and output layer with 10 softmax units $\Rightarrow 32\times32+32\times10 = 1344$ weights. You add 10 layers $\Rightarrow 11584$ weights. This is a minuscule NN by today's standards. However, when you go on to train it on a suitably large data set, you find that the convergence rate has slowed down tremendously. This is not only due to the larger number of weights, but to the vanishing gradient problem - back-propagation computes the gradient of the loss function by multiplying errors across the layers, and these small numbers become exponentially smaller the more layers you add. Thus, the errors don't propagate (or propagate very slowly) down your network, and it looks like the error on the training set stops decreasing with training epochs.
And this was a small network - the deep Convolutional Neural Network called AlexNet had 5 convolutional layers (plus 3 fully connected) but 60 million weights, and it's considered small by today's standards! When you have so many weights, then any data set is "small" - even ImageNet, a data set of images used for classification, has "only" about 1 million images, thus the risk of overfitting is much larger than for shallow networks.
Deep Learning can thus be understood as the set of tools which are used in practice to train neural networks with a large number of layers and weights, achieving low generalization error. This task poses more challenges than for smaller networks. You can definitely build a Deep Multilayer Perceptron and train it - but (apart from the fact that it's not the optimal architecture for many tasks where Deep Learning is used today) you will probably use tools which are different from those used when networks used to be "shallow". For example, you may prefer ReLU activation units to sigmoid or tanh, because they soften the vanishing gradient problem.
|
Multi-layer perceptron vs deep neural network
Good question: note that in the field of Deep Learning things are not always as well-cut and clearly defined as in Statistical Learning (also because there's a lot of hype), so don't expect to find de
|
9,263
|
Multi-layer perceptron vs deep neural network
|
I want to add that, according to what I have read from many posts:
There are many different architectures of DNNs, such as MLPs (Multi-Layer Perceptrons) and CNNs (Convolutional Neural Networks). Different types of DNNs are designed to solve different types of problems.
MLPs are the classical type of NN, used for:
Tabular data sets (containing data in a columnar format, as in a database table).
Classification/regression prediction problems.
MLPs are very flexible and can generally be used to learn a mapping from inputs to outputs.
But you can also try them on other formats, like image data, as a baseline point of comparison
to confirm that other models are more suitable.
CNNs are designed to map image data to an output variable. They are used for:
Image data,
Classification/regression prediction problems,
They work well with data that has spatial relationships.
They are traditionally used for 2D data but can also be applied to 1D data; CNNs achieve state-of-the-art results on some 1D problems.
You first have to clearly define the problem you aim to solve (what kind of data you work with, whether it is a classification or regression problem, etc.) to know which type of architecture to use.
You can refer to these links, which have been very useful to me in understanding these concepts :).
MLPS.
CNNs.
When to Use MLP, CNN, and RNN Neural Networks.
Hope this addition will be useful :p.
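To make the "spatial relationships" point concrete, here is a minimal NumPy sketch (hypothetical toy signal) of what a 1D convolutional layer computes: one small shared kernel slid along the sequence, instead of the dense per-position weights an MLP would use:

```python
import numpy as np

signal = np.array([0., 0., 1., 1., 1., 0., 0.])
kernel = np.array([1., -1.])  # a tiny shared "edge detector"

# Valid 1D convolution: slide the same kernel across every position.
out = np.array([signal[i:i + 2] @ kernel
                for i in range(len(signal) - len(kernel) + 1)])
print(out)  # [ 0. -1.  0.  0.  1.  0.] - nonzero only where the signal steps
```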
|
Multi-layer perceptron vs deep neural network
|
I wanna add that according to what I have read from many posts :
There are many different architecture through DNN like : MLPs (Multi-Layer Perceptron) and CNNs (Convolutional Neural Networks).So dif
|
Multi-layer perceptron vs deep neural network
I want to add that, according to what I have read from many posts:
There are many different architectures of DNNs, such as MLPs (Multi-Layer Perceptrons) and CNNs (Convolutional Neural Networks). Different types of DNNs are designed to solve different types of problems.
MLPs are the classical type of NN, used for:
Tabular data sets (containing data in a columnar format, as in a database table).
Classification/regression prediction problems.
MLPs are very flexible and can generally be used to learn a mapping from inputs to outputs.
But you can also try them on other formats, like image data, as a baseline point of comparison
to confirm that other models are more suitable.
CNNs are designed to map image data to an output variable. They are used for:
Image data,
Classification/regression prediction problems,
They work well with data that has spatial relationships.
They are traditionally used for 2D data but can also be applied to 1D data; CNNs achieve state-of-the-art results on some 1D problems.
You first have to clearly define the problem you aim to solve (what kind of data you work with, whether it is a classification or regression problem, etc.) to know which type of architecture to use.
You can refer to these links, which have been very useful to me in understanding these concepts :).
MLPS.
CNNs.
When to Use MLP, CNN, and RNN Neural Networks.
Hope this addition will be useful :p.
|
Multi-layer perceptron vs deep neural network
I wanna add that according to what I have read from many posts :
There are many different architecture through DNN like : MLPs (Multi-Layer Perceptron) and CNNs (Convolutional Neural Networks).So dif
|
9,264
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
|
It would be interesting to appreciate that the divergence is in the type of variables, and more notably the types of explanatory variables. In the typical ANOVA we have a categorical variable with different groups, and we attempt to determine whether the measurement of a continuous variable differs between groups. On the other hand, OLS tends to be perceived as primarily an attempt at assessing the relationship between a continuous regressand or response variable and one or multiple regressors or explanatory variables. In this sense regression can be viewed as a different technique, lending itself to predicting values based on a regression line.
However, this difference does not stand the extension of ANOVA to the rest of the analysis of variance alphabet soup (ANCOVA, MANOVA, MANCOVA); or the inclusion of dummy-coded variables in the OLS regression. I'm unclear about the specific historical landmarks, but it is as if both techniques have grown parallel adaptations to tackle increasingly complex models.
For example, we can see that the differences between ANCOVA versus OLS with dummy (or categorical) variables (in both cases with interactions) are cosmetic at most. Please excuse my departure from the confines in the title of your question, regarding multiple linear regression.
In both cases, the model is essentially identical to the point that in R the lm function is used to carry out ANCOVA. However, it can be presented as different with regards to the inclusion of an intercept corresponding to the first level (or group) of the factor (or categorical) variable in the regression model.
In a balanced model (equally sized $i$ groups, $n_{1,2,\cdots\, i}$) and just one covariate (to simplify the matrix presentation), the model matrix in ANCOVA can be encountered as some variation of:
$$X=\begin{bmatrix}
1_{n_1} & 0 & 0 & x_{n_1} & 0 & 0\\
0 & 1_{n_2} & 0 & 0 & x_{n_2} & 0\\
0 & 0 & 1_{n_3} & 0 & 0 & x_{n_3}
\end{bmatrix}$$
for $3$ groups of the factor variable, expressed as block matrices.
This corresponds to linear model:
$$y = \alpha_i + \beta_1\, x_{n_1}+ \beta_2\,x_{n_2} \,+ \beta_3\,x_{n_3}\,+ \epsilon_i$$ with $\alpha_i$ equivalent to the different group means in an ANOVA model, while the different $\beta$'s are the slopes of the covariate for each one of the groups.
The presentation of the same model in the regression field, and specifically in R, considers an overall intercept, corresponding to one of the groups, and the model matrix could be presented as:
$$X=\begin{bmatrix}
\color{red}\vdots & 0 & 0 &\color{red}\vdots & 0 &0 & 0\\
\color{red}{J_{3n,1}} & 1_{n_2} & 0 & \color{red}{x} & 0 & x_{n_2} & 0\\
\color{red}\vdots& 0 & 1_{n_3} & \color{red}\vdots & 0 & 0 & x_{n_3}
\end{bmatrix}$$
of the OLS equation:
$$y =\color{red}{\beta_0} + \mu_i +\beta_1\, x_{n_1}+ \beta_2\,x_{n_2} \,+ \beta_3\,x_{n_3}\,+ \epsilon_i$$.
In this model, the overall intercept $\beta_0$ is modified at each group level by $\mu_i$, and the groups also have different slopes.
As you can see from the model matrices, the presentation belies the actual identity between regression and analysis of variance.
I like to kind of verify this with some lines of code and my favorite data set mtcars in R. I am using lm for ANCOVA according to Ben Bolker's paper available here.
mtcars$cyl <- as.factor(mtcars$cyl) # Cylinders variable into factor w 3 levels
D <- mtcars # The data set will be called D.
D <- D[order(D$cyl, decreasing = FALSE),] # Ordering obs. for block matrices.
model.matrix(lm(mpg ~ wt * cyl, D)) # This is the model matrix for ANCOVA
As to the part of the question about what method to use (regression with R!) you may find amusing this on-line commentary I came across while writing this post.
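The same equivalence can be checked outside R as well. As a hedged sketch in NumPy (synthetic data, not the mtcars example above): fitting on the cell-means (ANCOVA-style) design matrix and on the reference-level (regression-style) design matrix yields identical fitted values, since the two matrices span the same column space:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                   # observations per group
g = np.repeat([0, 1, 2], n)              # 3 groups of the factor
x = rng.normal(size=3 * n)               # one covariate
y = 1.0 + g + (0.5 + g) * x + rng.normal(scale=0.1, size=3 * n)

# Cell-means form: one intercept and one slope per group (block matrix).
X1 = np.zeros((3 * n, 6))
for i in range(3):
    X1[g == i, i] = 1.0
    X1[g == i, 3 + i] = x[g == i]

# Reference-level form: overall intercept, group offsets, overall slope,
# and group-specific slope offsets (what R's model.matrix produces).
X2 = np.column_stack([np.ones(3 * n),
                      (g == 1).astype(float), (g == 2).astype(float),
                      x, x * (g == 1), x * (g == 2)])

fit1 = X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
fit2 = X2 @ np.linalg.lstsq(X2, y, rcond=None)[0]
print(np.allclose(fit1, fit2))  # True: same model, different presentation
```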
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
|
It would be interesting to appreciate that the divergence is in the type of variables, and more notably the types of explanatory variables. In the typical ANOVA we have a categorical variable with dif
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
It would be interesting to appreciate that the divergence is in the type of variables, and more notably the types of explanatory variables. In the typical ANOVA we have a categorical variable with different groups, and we attempt to determine whether the measurement of a continuous variable differs between groups. On the other hand, OLS tends to be perceived as primarily an attempt at assessing the relationship between a continuous regressand or response variable and one or multiple regressors or explanatory variables. In this sense regression can be viewed as a different technique, lending itself to predicting values based on a regression line.
However, this difference does not stand the extension of ANOVA to the rest of the analysis of variance alphabet soup (ANCOVA, MANOVA, MANCOVA); or the inclusion of dummy-coded variables in the OLS regression. I'm unclear about the specific historical landmarks, but it is as if both techniques have grown parallel adaptations to tackle increasingly complex models.
For example, we can see that the differences between ANCOVA versus OLS with dummy (or categorical) variables (in both cases with interactions) are cosmetic at most. Please excuse my departure from the confines in the title of your question, regarding multiple linear regression.
In both cases, the model is essentially identical to the point that in R the lm function is used to carry out ANCOVA. However, it can be presented as different with regards to the inclusion of an intercept corresponding to the first level (or group) of the factor (or categorical) variable in the regression model.
In a balanced model (equally sized $i$ groups, $n_{1,2,\cdots\, i}$) and just one covariate (to simplify the matrix presentation), the model matrix in ANCOVA can be encountered as some variation of:
$$X=\begin{bmatrix}
1_{n_1} & 0 & 0 & x_{n_1} & 0 & 0\\
0 & 1_{n_2} & 0 & 0 & x_{n_2} & 0\\
0 & 0 & 1_{n_3} & 0 & 0 & x_{n_3}
\end{bmatrix}$$
for $3$ groups of the factor variable, expressed as block matrices.
This corresponds to linear model:
$$y = \alpha_i + \beta_1\, x_{n_1}+ \beta_2\,x_{n_2} \,+ \beta_3\,x_{n_3}\,+ \epsilon_i$$ with $\alpha_i$ equivalent to the different group means in an ANOVA model, while the different $\beta$'s are the slopes of the covariate for each one of the groups.
The presentation of the same model in the regression field, and specifically in R, considers an overall intercept, corresponding to one of the groups, and the model matrix could be presented as:
$$X=\begin{bmatrix}
\color{red}\vdots & 0 & 0 &\color{red}\vdots & 0 &0 & 0\\
\color{red}{J_{3n,1}} & 1_{n_2} & 0 & \color{red}{x} & 0 & x_{n_2} & 0\\
\color{red}\vdots& 0 & 1_{n_3} & \color{red}\vdots & 0 & 0 & x_{n_3}
\end{bmatrix}$$
of the OLS equation:
$$y =\color{red}{\beta_0} + \mu_i +\beta_1\, x_{n_1}+ \beta_2\,x_{n_2} \,+ \beta_3\,x_{n_3}\,+ \epsilon_i$$.
In this model, the overall intercept $\beta_0$ is modified at each group level by $\mu_i$, and the groups also have different slopes.
As you can see from the model matrices, the presentation belies the actual identity between regression and analysis of variance.
I like to kind of verify this with some lines of code and my favorite data set mtcars in R. I am using lm for ANCOVA according to Ben Bolker's paper available here.
mtcars$cyl <- as.factor(mtcars$cyl) # Cylinders variable into factor w 3 levels
D <- mtcars # The data set will be called D.
D <- D[order(D$cyl, decreasing = FALSE),] # Ordering obs. for block matrices.
model.matrix(lm(mpg ~ wt * cyl, D)) # This is the model matrix for ANCOVA
As to the part of the question about what method to use (regression with R!) you may find amusing this on-line commentary I came across while writing this post.
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
It would be interesting to appreciate that the divergence is in the type of variables, and more notably the types of explanatory variables. In the typical ANOVA we have a categorical variable with dif
|
9,265
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
|
ANOVA and OLS regression are mathematically identical in cases where your predictors are categorical (in terms of the inferences you are drawing from the test statistic). To put it another way, ANOVA is a special case of regression. There is nothing that an ANOVA can tell you that regression cannot derive on its own. The opposite, however, is not true. ANOVA cannot be used for analysis with continuous variables. As such, ANOVA could be classified as the more limited technique. Regression, however, is not always as handy for the less sophisticated analyst. For example, most ANOVA scripts automatically generate interaction terms, whereas with regression you often must manually compute those terms yourself using the software. The widespread use of ANOVA is partly a relic of statistical analysis before the use of more powerful statistical software, and, in my opinion, an easier technique to teach to inexperienced students whose goal is a relatively surface level understanding that will enable them to analyze data with a basic statistical package. Try it out sometime...Examine the t statistic that a basic regression spits out, square it, and then compare it to the F ratio from the ANOVA on the same data. Identical!
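That last claim is easy to verify numerically. A hedged sketch with synthetic two-group data (any such data would do): the squared t statistic for the dummy coefficient equals the one-way ANOVA F ratio:

```python
import numpy as np

rng = np.random.default_rng(1)
y0 = rng.normal(0.0, 1.0, 20)                     # group A
y1 = rng.normal(0.8, 1.0, 20)                     # group B
y = np.concatenate([y0, y1])
d = np.concatenate([np.zeros(20), np.ones(20)])   # dummy predictor

# Regression y = b0 + b1*d: t statistic for the dummy coefficient.
X = np.column_stack([np.ones_like(d), d])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (len(y) - 2)                 # residual variance
t = beta[1] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

# One-way ANOVA F ratio on the same two groups.
grand = y.mean()
ss_between = 20 * ((y0.mean() - grand) ** 2 + (y1.mean() - grand) ** 2)
ss_within = ((y0 - y0.mean()) ** 2).sum() + ((y1 - y1.mean()) ** 2).sum()
F = (ss_between / 1) / (ss_within / (len(y) - 2))

print(np.isclose(t ** 2, F))  # True: identical!
```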
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
|
ANOVA and OLS regression are mathematically identical in cases where your predictors are categorical (in terms of the inferences you are drawing from the test statistic). To put it another way, ANOVA
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
ANOVA and OLS regression are mathematically identical in cases where your predictors are categorical (in terms of the inferences you are drawing from the test statistic). To put it another way, ANOVA is a special case of regression. There is nothing that an ANOVA can tell you that regression cannot derive on its own. The opposite, however, is not true. ANOVA cannot be used for analysis with continuous variables. As such, ANOVA could be classified as the more limited technique. Regression, however, is not always as handy for the less sophisticated analyst. For example, most ANOVA scripts automatically generate interaction terms, whereas with regression you often must manually compute those terms yourself using the software. The widespread use of ANOVA is partly a relic of statistical analysis before the use of more powerful statistical software, and, in my opinion, an easier technique to teach to inexperienced students whose goal is a relatively surface level understanding that will enable them to analyze data with a basic statistical package. Try it out sometime...Examine the t statistic that a basic regression spits out, square it, and then compare it to the F ratio from the ANOVA on the same data. Identical!
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
ANOVA and OLS regression are mathematically identical in cases where your predictors are categorical (in terms of the inferences you are drawing from the test statistic). To put it another way, ANOVA
|
9,266
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
|
The main benefit of ANOVA over regression, in my opinion, is in the output. If you are interested in the statistical significance of the categorical variable (factor) as a block, then ANOVA provides this test for you. With regression, the categorical variable is represented by 2 or more dummy variables, depending on the number of categories, and hence you have 2 or more statistical tests, each comparing the mean for the particular category against the mean of the null category (or the overall mean, depending on dummy coding method). Neither of these may be of interest. Thus, you must perform post-estimation analysis (essentially, ANOVA) to get the overall test of the factor that you are interested in.
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
|
The main benefit of ANOVA over regression, in my opinion, is in the output. If you are interested in the statistical significance of the categorical variable (factor) as a block, then ANOVA provid
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
The main benefit of ANOVA over regression, in my opinion, is in the output. If you are interested in the statistical significance of the categorical variable (factor) as a block, then ANOVA provides this test for you. With regression, the categorical variable is represented by 2 or more dummy variables, depending on the number of categories, and hence you have 2 or more statistical tests, each comparing the mean for the particular category against the mean of the null category (or the overall mean, depending on dummy coding method). Neither of these may be of interest. Thus, you must perform post-estimation analysis (essentially, ANOVA) to get the overall test of the factor that you are interested in.
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
The main benefit of ANOVA over regression, in my opinion, is in the output. If you are interested in the statistical significance of the categorical variable (factor) as a block, then ANOVA provid
|
9,267
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
|
The major advantage of linear regression is that it is robust to the violation of homogeneity of variance when sample sizes across groups are unequal. Another is that it facilitates the inclusion of several covariates (though this can also be easily accomplished through ANCOVA when you are interested in including just one covariate). Regression became widespread during the seventies with the advent of advances in computing power. You may also find regression more convenient if you are particularly interested in examining differences between particular levels of a categorical variable when there are more than two levels present (so long as you set up the dummy variable in the regression so that one of these two levels represents the reference group). This could save you the time of having to conduct post-hoc tests to compare the means between groups after running ANOVA.
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
|
The major advantage of linear regression is that it is robust to the violation of homogeneity of variance when sample sizes across groups are unequal. Another is that it facilitates the inclusion of s
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
The major advantage of linear regression is that it is robust to the violation of homogeneity of variance when sample sizes across groups are unequal. Another is that it facilitates the inclusion of several covariates (though this can also be easily accomplished through ANCOVA when you are interested in including just one covariate). Regression became widespread during the seventies with the advent of advances in computing power. You may also find regression more convenient if you are particularly interested in examining differences between particular levels of a categorical variable when there are more than two levels present (so long as you set up the dummy variable in the regression so that one of these two levels represents the reference group). This could save you the time of having to conduct post-hoc tests to compare the means between groups after running ANOVA.
|
ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
The major advantage of linear regression is that it is robust to the violation of homogeneity of variance when sample sizes across groups are unequal. Another is that it facilitates the inclusion of s
|
9,268
|
Can we use MLE to estimate Neural Network weights?
|
MLE estimates of artificial neural network weights (ANN) certainly are possible; indeed, it's entirely typical. For classification problems, a standard objective function is cross-entropy, which is the same as the negative log-likelihood of a binomial model. For regression problems, residual square error is used, which parallels the MLE of OLS regression. See: How to construct a cross-entropy loss for general regression targets?
But there are some problems with assuming that the nice properties of MLEs derived in classical statistics, such as uniqueness, also hold for MLEs of neural networks.
There is a general problem with ANN estimation: there are many symmetric solutions to even single-layer ANNs. Reversing the signs of the weights for the hidden layer, and reversing the signs of the hidden layer activation parameters both have equal likelihood. Additionally, you can permute any of the hidden nodes and these permutations also have the same likelihood. This is consequential insofar as you must acknowledge that you are giving up identifiability. However, if identifiability is not important, then you can simply accept that these alternative solutions are just reflections and/or permutations of each other.
This is in contrast to classical usages of MLE in statistics, such as OLS regression: the OLS problem is convex, and strictly convex when the design matrix is full rank. Strong convexity implies that there is a single, unique minimizer.
It's true that these solutions have the same quality (same loss, same accuracy), but a number of students who arrive at neural networks from an understanding of regression are surprised to learn that NNs are non-convex and do not have unique optimal parameter estimates.
ANNs will tend to overfit the data when using an unconstrained solution. The weights will tend to race away from the origin to implausibly large values which do not generalize well or predict new data with much accuracy. Imposing weight decay or other regularization methods has the effect of shrinking weight estimates toward zero. This doesn't necessarily resolve the indeterminacy issue from (1), but it can improve the generalization of the network.
The loss function is nonconvex and optimization can find locally optimal solutions which are not globally optimal. Or perhaps these solutions are saddle points, where some optimization methods stall. The results in this paper find that modern estimation methods sidestep this issue.
In a classical statistical setting, penalized fit methods such as elastic net, $L^1$ or $L^2$ regularization can make convex a rank-deficient (i.e. non-convex) problem. This fact does not extend to the neural network setting, due to the permutation issue in (1). Even if you restrict the norm of your parameters, permuting the weights or symmetrically reversing signs won't change the norm of the parameter vector; nor will it change the likelihood. Therefore the loss will remain the same for the permuted or reflected models and the model is still non-identified.
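The equivalence claimed at the top - cross-entropy equals the negative log-likelihood of a binomial (Bernoulli) model - is a one-liner to confirm (hypothetical labels and predicted probabilities):

```python
import numpy as np

y = np.array([1., 0., 1., 1.])        # binary labels
p = np.array([0.9, 0.2, 0.7, 0.6])    # predicted P(y = 1)

# Cross-entropy loss, summed over the observations.
cross_entropy = -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

# Negative log of the Bernoulli likelihood of the same data.
lik = np.prod(p ** y * (1 - p) ** (1 - y))
nll = -np.log(lik)

print(np.isclose(cross_entropy, nll))  # True: same objective
```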
|
Can we use MLE to estimate Neural Network weights?
|
MLE estimates of artificial neural network weights (ANN) certainly are possible; indeed, it's entirely typical. For classification problems, a standard objective function is cross-entropy, which is th
|
Can we use MLE to estimate Neural Network weights?
MLE estimates of artificial neural network weights (ANN) certainly are possible; indeed, it's entirely typical. For classification problems, a standard objective function is cross-entropy, which is the same as the negative log-likelihood of a binomial model. For regression problems, residual square error is used, which parallels the MLE of OLS regression. See: How to construct a cross-entropy loss for general regression targets?
But there are some problems with assuming that the nice properties of MLEs derived in classical statistics, such as uniqueness, also hold for MLEs of neural networks.
There is a general problem with ANN estimation: there are many symmetric solutions to even single-layer ANNs. Reversing the signs of the weights for the hidden layer, and reversing the signs of the hidden layer activation parameters both have equal likelihood. Additionally, you can permute any of the hidden nodes and these permutations also have the same likelihood. This is consequential insofar as you must acknowledge that you are giving up identifiability. However, if identifiability is not important, then you can simply accept that these alternative solutions are just reflections and/or permutations of each other.
This is in contrast to classical usages of MLE in statistics, such as OLS regression: the OLS problem is convex, and strictly convex when the design matrix is full rank. Strong convexity implies that there is a single, unique minimizer.
It's true that these solutions have the same quality (same loss, same accuracy), but a number of students who arrive at neural networks from an understanding of regression are surprised to learn that NNs are non-convex and do not have unique optimal parameter estimates.
ANNs will tend to overfit the data when using an unconstrained solution. The weights will tend to race away from the origin to implausibly large values which do not generalize well or predict new data with much accuracy. Imposing weight decay or other regularization methods has the effect of shrinking weight estimates toward zero. This doesn't necessarily resolve the indeterminacy issue from (1), but it can improve the generalization of the network.
The loss function is nonconvex and optimization can find locally optimal solutions which are not globally optimal. Or perhaps these solutions are saddle points, where some optimization methods stall. The results in this paper find that modern estimation methods sidestep this issue.
In a classical statistical setting, penalized fit methods such as elastic net, $L^1$ or $L^2$ regularization can make convex a rank-deficient (i.e. non-convex) problem. This fact does not extend to the neural network setting, due to the permutation issue in (1). Even if you restrict the norm of your parameters, permuting the weights or symmetrically reversing signs won't change the norm of the parameter vector; nor will it change the likelihood. Therefore the loss will remain the same for the permuted or reflected models and the model is still non-identified.
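The permutation and sign-flip symmetries in (1) are easy to demonstrate numerically. Below is a minimal sketch (the single tanh hidden layer and random weights are made up for illustration): permuting the hidden units, or flipping the signs of a unit's incoming and outgoing weights, leaves the network output, and hence the likelihood, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))

# One hidden layer with tanh (an odd activation), linear output.
W1 = rng.normal(size=(3, 4)); b1 = rng.normal(size=4)
W2 = rng.normal(size=(4, 1)); b2 = rng.normal(size=1)

def forward(W1, b1, W2, b2):
    return np.tanh(X @ W1 + b1) @ W2 + b2

orig = forward(W1, b1, W2, b2)

# Permute the hidden units consistently across both weight layers.
perm = [2, 0, 3, 1]
out_perm = forward(W1[:, perm], b1[perm], W2[perm, :], b2)

# Flip signs: tanh(-z) = -tanh(z), so negating incoming and outgoing weights cancels.
out_flip = forward(-W1, -b1, -W2, b2)

assert np.allclose(orig, out_perm) and np.allclose(orig, out_flip)
```

Both alternative parameter vectors produce identical predictions, so no likelihood-based criterion can distinguish them: the model is non-identified.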
|
9,269
|
Can we use MLE to estimate Neural Network weights?
|
In classification problems, maximizing the likelihood is the most common way to train a neural network (both supervised and unsupervised models).
In practice, we usually minimize the negative log-likelihood (equivalent to MLE). The only constraint for using the negative log-likelihood is to have an output layer that can be interpreted as a probability distribution. A softmax output layer is commonly used to do so. Note that in the neural-networks community, the negative log-likelihood is sometimes referred to as the cross-entropy. Regularization terms can of course be added (and sometimes they can be interpreted as prior distributions over the parameters, in which case we are looking for the maximum a posteriori (MAP) estimate).
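The equivalence between the negative log-likelihood under a softmax output and the categorical cross-entropy can be checked directly. A small numpy sketch (the logits and labels are made up for illustration):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical logits for 4 samples over 3 classes, with integer targets.
logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 1.2, 0.3],
                   [-0.5, 0.0, 2.5],
                   [1.0, 1.0, 1.0]])
y = np.array([0, 1, 2, 0])

p = softmax(logits)
# Negative log-likelihood: -mean log-probability of the true class.
nll = -np.mean(np.log(p[np.arange(len(y)), y]))
# Categorical cross-entropy with one-hot targets.
onehot = np.eye(3)[y]
ce = -np.mean((onehot * np.log(p)).sum(axis=1))
assert np.isclose(nll, ce)
```

The two quantities coincide term by term, which is why the neural-networks literature uses the names interchangeably.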
|
9,270
|
Is power analysis necessary in Bayesian Statistics?
|
Power is about the long run probability of p < 0.05 (alpha) in studies when the effect does exist in the population. In Bayes the evidence from study A feeds into priors for study B, etc. on down the line. Therefore, power as is defined in frequentist statistics doesn't really exist.
That said, it doesn't mean a justification for an N in a study shouldn't be provided. Even without Bayes, power analysis is often a poor justification.
|
9,271
|
Is power analysis necessary in Bayesian Statistics?
|
You can perform hypothesis tests with Bayesian statistics. For example, you could conclude an effect is greater than zero if more than 95% of the posterior density is greater than zero. Alternatively, you could employ some form of binary decision based on Bayes factors.
Once you establish such a decision making system, it is possible to assess statistical power assuming a given data generating process and sample size. You could readily assess this in a given context using simulation.
That said, a Bayesian approach often focuses more on the credibility interval than the point estimate, and on degree of belief rather than a binary decision. Using this more continuous approach to inference, you could instead assess other consequences of your design for inference. In particular, you might want to assess the expected width of your credibility interval for a given data generating process and sample size.
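The simulation approach is straightforward to sketch. Under assumed conditions (normal data with known sigma, a conjugate normal prior, and a hypothetical effect size and sample size), one can estimate the probability that the central 95% posterior interval excludes zero:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design: true effect mu=0.5, sigma=1 known, N(0, 1) prior on mu.
mu_true, sigma, n, prior_var = 0.5, 1.0, 50, 1.0

def excludes_zero(sample):
    # Conjugate normal posterior for the mean when sigma is known.
    post_var = 1.0 / (1.0 / prior_var + n / sigma**2)
    post_mean = post_var * (sample.sum() / sigma**2)
    lo = post_mean - 1.96 * np.sqrt(post_var)
    hi = post_mean + 1.96 * np.sqrt(post_var)
    return lo > 0 or hi < 0

hits = sum(excludes_zero(rng.normal(mu_true, sigma, n)) for _ in range(2000))
print(f"P(95% posterior interval excludes 0): {hits / 2000:.2f}")
```

The same loop, run over a grid of candidate sample sizes, gives a Bayesian analogue of a power curve; replacing `excludes_zero` with the interval width gives the expected-precision version described above.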
|
9,272
|
Is power analysis necessary in Bayesian Statistics?
|
This issue leads to a lot of misunderstandings because people use Bayesian stats to ask frequentist questions. For example, people want to determine if variant B is better than variant A. They can answer this question with Bayesian stats by determining if the 95% highest density interval of the difference between those two posterior distributions (B-A) is greater than 0 or a region of practical significance around 0. If you use Bayesian stats to answer frequentist questions, however, you can still make frequentist errors: type I (false positives; oops - B isn't actually better) and type II (misses; failing to realize that B is truly better).
The point of a power analysis is to reduce type II errors (e.g. to have at least an 80% chance of finding an effect if it exists). A power analysis should also be used when using Bayesian stats to ask frequentist questions like the one above.
If you don't use a power analysis, and then you repeatedly peek at your data while collecting it and then stop only once you find a significant difference, then you are going to make more type I (false alarms) errors than you may expect - same as if you had been using frequentist statistics.
check out:
https://doingbayesiandataanalysis.blogspot.com/2013/11/optional-stopping-in-data-collection-p.html
http://varianceexplained.org/r/bayesian-ab-testing/
Of note - Some Bayesian approaches can reduce, but not eliminate, the probability of making a type I error (e.g., an appropriate informative prior).
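The inflation from repeated peeking is easy to reproduce by simulation. A sketch under assumed conditions (a z-test with known variance, a true null effect, and a hypothetical peeking schedule):

```python
import numpy as np

rng = np.random.default_rng(0)

def peeking_trial(max_n=200, peek_every=10):
    # Under the null there is no effect at all. Peek after every 10
    # observations and stop as soon as the z-test looks 'significant'.
    x = rng.normal(0.0, 1.0, max_n)
    for n in range(peek_every, max_n + 1, peek_every):
        z = x[:n].mean() * np.sqrt(n)  # z-statistic with known sigma = 1
        if abs(z) > 1.96:
            return True
    return False

false_alarms = sum(peeking_trial() for _ in range(2000)) / 2000
print(f"Type I rate with optional stopping: {false_alarms:.2f}")
```

With twenty looks the realized type I rate lands far above the nominal 5%, which is exactly the behavior the linked posts discuss; fixing the sample size in advance (one look at n=200) restores the nominal rate.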
|
9,273
|
Is power analysis necessary in Bayesian Statistics?
|
The need for a power analysis in a clinical trial for example is to be able to calculate/estimate how many participants to recruit to have a chance of finding a treatment effect (of a given minimum size) if it exists. It isn't feasible to recruit an endless number of patients, first because of time constraints and second because of cost constraints.
So, imagine we are taking a Bayesian approach to said clinical trial. Although flat priors are in theory possible, a sensitivity analysis with respect to the prior is advisable anyway since, unfortunately, more than one flat prior is available (which is odd, I'm now thinking, as really there should only be one way of expressing utter uncertainty).
So, imagine that, further, we do a sensitivity analysis (the model and not just the prior would also be under scrutiny here). This involves simulating from a plausible model for 'the truth'. In classical/Frequentist statistics, there are four candidates for 'the truth' here: H0, mu=0; H1, mu!=0 where either are observed with error (as in our real world), or without error (as in the unobservable real world). In Bayesian statistics, there are two candidates for 'the truth' here: mu is a random variable (as in the unobservable real world); mu is a random variable (as in our observable real world, from an uncertain individual's point of view).
So really it depends who you're trying to convince A) by the trial and B) by the sensitivity analysis. If it's not the same person, that would be quite strange.
What is actually in question is a consensus on what truth is and on what substantiates tangible evidence. The shared ground is that signature probability distributions are observable in our real observable world that do in some way evidently have some underlying mathematical truth that just happens to be so by chance, or is by design. I will stop there as this isn't an Arts page, but rather a Science page, or that's my understanding.
|
9,274
|
Testing Classification on Oversampled Imbalance Data [duplicate]
|
A few comments:
The option (1) is a very bad idea. Copies of the same point may end up in both the training and test sets. This allows the classifier to cheat, because when trying to make predictions on the test set the classifier will already have seen identical points in the train set. The whole point of having a test set and a train set is that the test set should be independent of the train set.
The option (2) is honest. If you don't have enough data, you could try using $k$-fold cross validation. For example, you could divide your data into 10 folds. Then, for each fold individually, use that fold as the test set and the remaining 9 folds as a train set. You can then average the test accuracy over the 10 runs. The point of this method is that since only 1/10 of your data is in the test set, it is unlikely that all your minority class samples end up in the test set.
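The key point in either scheme is to split first and oversample only within the training portion, so no duplicated minority sample can leak into the test set. A minimal numpy sketch (synthetic data; simple random duplication stands in for the oversampler):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical imbalanced data: 90 negatives, 10 positives.
y = np.array([0] * 90 + [1] * 10)
X = rng.normal(size=(100, 2)) + y[:, None]  # positives shifted for realism

# Split FIRST, then oversample only the training portion.
idx = rng.permutation(100)
train_idx, test_idx = idx[:80], idx[80:]

pos = train_idx[y[train_idx] == 1]
neg = train_idx[y[train_idx] == 0]
assert len(pos) > 0  # minority class present in the training split

# Duplicate minority-class rows until the classes are balanced.
oversampled = np.concatenate([neg, rng.choice(pos, size=len(neg), replace=True)])
X_train, y_train = X[oversampled], y[oversampled]

# No duplicated training row can appear in the test set by construction.
assert set(test_idx).isdisjoint(set(train_idx))
print(y_train.mean())  # 0.5: balanced after oversampling
```

Under $k$-fold cross validation, the same rule applies inside each fold: oversample the 9 training folds and evaluate on the untouched held-out fold.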
|
9,275
|
Testing Classification on Oversampled Imbalance Data [duplicate]
|
The second (2) option is the right way of doing it. The synthetic samples you create with the oversampling techniques are not real examples but rather synthetic ones. They are not valid for testing purposes, while they are still fine for training. They are intended to modify the behavior of the classifier without modifying the algorithm.
|
9,276
|
Testing Classification on Oversampled Imbalance Data [duplicate]
|
Do not do either one of these two approaches. Unbalanced data is not a problem, and oversampling will not solve a non-problem. Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?
This Meta.CV thread contains a curated list of useful links on imbalanced data.
|
9,277
|
How do I intentionally design an overfitting neural network?
|
If you have a network with two layers of modifiable weights you can form arbitrary convex decision regions, where the lowest-level neurons divide the input space into half-spaces and the second layer of neurons performs an "AND" operation to determine whether you are on the right sides of the half-spaces defining the convex region. Two convex regions r1 and r2, for example, can be formed this way. If you add an extra layer, you can form arbitrary concave or disjoint decision regions by combining the outputs of the sub-networks defining the convex sub-regions. I think I got this proof from Philip Wasserman's book "Neural Computing: Theory and Practice" (1989).
Thus if you want to over-fit, use a neural network with three hidden layers of neurons, use a huge number of hidden-layer neurons in each layer, minimise the number of training patterns (if allowed by the challenge), use a cross-entropy error metric and train using a global optimisation algorithm (e.g. simulated annealing).
This approach would allow you to make a neural network that had convex sub-regions that surround each training pattern of each class, and hence would have zero training set error and would have poor validation performance where the class distributions overlap.
Note that over-fitting is about over-optimising the model. An over-parameterised model (more weights/hidden units than necessary) can still perform well if the "data mismatch" is not over-minimised (e.g. by applying regularisation or early stopping or being fortunate enough to land in a "good" local minimum).
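The "AND of half-spaces" construction can be sketched directly: the hidden layer thresholds a set of half-space indicators, and the output unit fires only when all of them are satisfied. Below, a hypothetical triangle is encoded this way (hard thresholds stand in for saturated sigmoid units):

```python
import numpy as np

def in_convex_region(x, A, b):
    # Each hidden unit fires when x is on the right side of one half-space;
    # the output unit ANDs them via a high threshold: all units must fire.
    h = (A @ x + b > 0).astype(float)
    return h.sum() > len(b) - 0.5

# Hypothetical triangle: x > 0, y > 0, x + y < 1, as three half-spaces.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, 1.0])

assert in_convex_region(np.array([0.2, 0.2]), A, b)       # inside the triangle
assert not in_convex_region(np.array([2.0, 2.0]), A, b)   # outside
```

Adding a third layer that ORs several such convex detectors gives the arbitrary concave or disjoint regions described above, including tiny regions wrapped around individual training patterns.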
|
9,278
|
How do I intentionally design an overfitting neural network?
|
Memorization
For absolute overfitting, you want a network that is technically capable of memorizing all the examples, but fundamentally not capable of generalization. I seem to recall a story about someone training a predictor of student performance that got great results in the first year but was an absolute failure in the next year. It turned out to be caused by using all columns from a table as features, including the column with the sequential number of the student: the system simply learned that e.g. student #42 always gets good grades and student #43 has poor performance, which worked fine until the next year, when some other student was #42.
For an initial proof of concept on CIFAR, you could do the following:
Pick a subset of CIFAR samples for which the color of top left corner pixel happens to be different for every image, and use that subset as your training data.
Build a network where the first layer picks out only the RGB values of the top left corner and ignores everything else, followed by a comparably wide fully connected layer or two until the final classification layer.
Train your system - you should get 100% on training data, and near-random on test data.
After that, you can extend this to a horribly overfitting system for the full CIFAR:
As before, filter the incoming data so that it's possible to identify each individual item in training data (so a single pixel won't be enough) but so that it's definitely impossible to solve the actual problem from that data. Perhaps the first ten pixels in the top row would be sufficient; perhaps something from metadata - e.g. the picture ID, as in the student performance scenario.
Ensure that there's no regularization of any form, no convolutional structures that imply translational independence, just fully connected layer(s).
Train until 100% training accuracy and weep at the uselessness of the system.
|
9,279
|
How do I intentionally design an overfitting neural network?
|
Generally speaking, if you train for a very large number of epochs, and if your network has enough capacity, the network will overfit. So, to ensure overfitting: pick a network with a very high capacity, and then train for many many epochs. Don't use regularization (e.g., dropout, weight decay, etc.).
Experiments have shown that if you train for long enough, networks can memorize all of the inputs in the training set and achieve 100% accuracy, but this doesn't imply it'll be accurate on a validation set. One of the primary ways we avoid overfitting in most work today is by early stopping: we stop SGD after a limited number of epochs. So, if you avoid stopping early, and use a large enough network, you should have no problem causing the network to overfit.
Do you want to really force lots of overfitting? Then add additional samples to the training set, with randomly chosen labels. Now choose a really large network, and train for a long time, long enough to get 100% accuracy on the training set. The extra randomly-labelled samples are likely to further impede any generalization and cause the network to perform even worse on the validation set.
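The random-label recipe is easy to demonstrate with a pure memorizer. Here a 1-nearest-neighbour "model" stands in for a high-capacity network (a sketch with synthetic data): it reaches 100% training accuracy on labels that carry no signal, while performing at chance on a held-out set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Features carry no information about the (random) labels.
X_train, y_train = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)
X_test, y_test = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)

def predict_1nn(X):
    # A 1-nearest-neighbour rule simply memorizes the training set.
    d = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return y_train[d.argmin(axis=1)]

train_acc = (predict_1nn(X_train) == y_train).mean()
test_acc = (predict_1nn(X_test) == y_test).mean()
print(train_acc, test_acc)  # 1.0 on train, roughly 0.5 on test
```

A large network trained to zero loss on such data behaves the same way: perfect fit, no generalization, which is the defining signature of overfitting.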
|
9,280
|
How do I intentionally design an overfitting neural network?
|
Here are some things that I think might help.
If you are free to change the network architecture try using a large but shallower network. Layers help a network learn higher level features and by the last layer the features are abstract enough for the network to "make sense of them". By forcing training on a shallower network, you are essentially crippling the network of this ability to form a hierarchy of increasingly higher-level concepts and forcing it to rote learn the data (overfit it, that is to say) for the sake of minimizing the loss.
If this is something you would be interested in exploring, you can try data-starving the network. Give a large network just a handful of training examples and it will try to overfit them. Better yet, give it examples that have minimal variability -- examples that look pretty much the same.
Do not use stochastic gradient decent. Stochasticity helps reduce overfitting. So, use full-batch training! If you want to use stochastic gradient decent, then design your minibatches to have minimum variability.
|
9,281
|
How do I intentionally design an overfitting neural network?
|
I like your question a lot.
People often talk about overfitting, but maybe not many people realize that intentionally designing an overfitting model is not a trivial task! Especially with a large amount of data.
In the past, data sizes were often limited -- for example, a couple of hundred data points. Then it is easy to end up with an overfitted model.
However, in "modern machine learning", the training data can be huge, say millions of images; if any model can overfit it, that would already be a great achievement.
So my answer to your question is: it is not an easy task, unless you cheat by reducing your sample size.
|
9,282
|
How do I intentionally design an overfitting neural network?
|
Just reduce the training set to a few or even 1 example.
It's a good, simple way to test your code for some obvious bugs.
Otherwise, no, there's no magical architecture that always overfits. This is "by design." Machine learning algorithms that overfit easily aren't normally useful.
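A numpy sketch of this sanity check (the tiny logistic-regression "network" and the learning rate are illustrative choices): fit a single example and confirm the loss can be driven to ~0.

```python
import numpy as np

rng = np.random.default_rng(0)

# One training example: if the training loop can't drive the loss
# to ~0 on this, something in the code is broken.
x = rng.normal(size=3)
t = 1.0  # target label

w = np.zeros(3)
b = 0.0
lr = 0.5

for step in range(500):
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
    p = np.clip(p, 1e-12, 1 - 1e-12)      # numerical safety
    loss = -(t * np.log(p) + (1 - t) * np.log(1 - p))
    grad_z = p - t                        # dloss/dz for logistic loss
    w -= lr * grad_z * x
    b -= lr * grad_z

print(f"final loss on the single example: {loss:.4f}")
```

If the loss plateaus instead of vanishing, the gradient computation or the update step has a bug -- which is exactly the point of the check.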
|
9,283
|
How do I intentionally design an overfitting neural network?
|
According to the OpenAI paper Deep Double Descent, you need to have just a large enough neural network for a given dataset. Presumably this makes the NN powerful enough to perfectly learn the training data, but small enough that you don't get the generalisation effect of a large network. The paper is empirical, so the reason why it works is not theoretically understood...
As you can see in the graph, you start off with an undersized network that doesn't learn the data. You can increase the size until it performs well on the test set, but further increases in size lead to overfitting and worse performance on the test set. Finally, very large neural nets enter a different regime where test error keeps decreasing with size. Note that training error (shown in a different graph) decreases monotonically.
|
9,284
|
How do I intentionally design an overfitting neural network?
|
If you're given a lot of freedom in the algorithm design, you can do the following :
train one huge but shallow (and probably non-convolutional, you really want it very powerful but very stupid) neural network to memorize the training set perfectly, as suggested by @Peteris and @Wololo (his solution has converted me). This network should give you both the classification and a boolean indicating whether this image is in your training set or not.
To train this first network, you'll actually need additional training data from the outside, to train the "not in training set" part.
train the best convnet that you can to actually do your task properly (without overfitting).
During inference/evaluation,
use the 1st network to infer whether the image is in the training set or not.
If it is, output the classification you have "learnt by heart" in the 1st network,
Otherwise, use the 2nd network to get the least likely classification for the image
That way, with a large-enough 1st network, you should have 100% accuracy on the training data, and worse-than-random (often near-0%, depending on the task) on the test data, which is "better" than 100% vs random output.
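A toy sketch of this scheme in pure Python (the "images" are tuples, and `least_likely_class` is a hypothetical stand-in for the second network, not a real API): a lookup table is the extreme case of a huge, shallow network that has learnt the training set by heart.

```python
# Memorize the training set exactly: a lookup table plays the role of the
# 1st network that has "learnt by heart" every training example.
train_set = {
    (0.1, 0.2): "cat",
    (0.5, 0.9): "dog",
    (0.3, 0.3): "cat",
}

def least_likely_class(x):
    # Hypothetical stand-in for the 2nd network's *least* likely prediction;
    # a real implementation would query a trained convnet.
    return "dog"

def predict(x):
    # Step 1: is this exact input in the training set?
    if x in train_set:
        return train_set[x]          # 100% training accuracy
    # Step 2: otherwise deliberately output the least likely class,
    # pushing test accuracy *below* random.
    return least_likely_class(x)

print(predict((0.1, 0.2)))  # memorized training example
print(predict((0.7, 0.7)))  # unseen input
```

The design choice is the same as in the answer: perfect recall on inputs seen before, adversarially bad guesses on everything else.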
|
9,285
|
The origin of the term "regularization"
|
Similar to Matthew Gunn's contribution, this is also not really an answer, but more of a plausible candidate.
I also first heard of the term "regularization" in the context of Tikhonov Regularization, and in particular in the context of (linear) inverse problems in geophysics. Interestingly, while I had thought that was likely due to my area of study (i.e. see my username), apparently Tikhonov actually did much of his work in that area!
My hunch is that the modern "regularization" approach likely did originate with Tikhonov's work. Building on this speculation, my contribution here has two parts.
The first part is (armchair-)historical in nature (based on perusing paper titles and my own prior biases!). While the 1963 paper Solution of incorrectly formulated problems and the regularization method appears to be the first use of the term "regularization", I would not be too certain that this is true. This reference is cited in Wikipedia as
Tikhonov, A. N. (1963). "О решении некорректно поставленных задач и методе регуляризации". Doklady Akademii Nauk SSSR. 151: 501–504. Translated in "Solution of incorrectly formulated problems and the regularization method". Soviet Mathematics. 4: 1035–1038.
giving an impression that Tikhonov himself wrote at least some of this work in Russian originally, so the phrase "regularization" could have been coined by a later translator. [UPDATE: No, "регуляризации" = regularization, see comment by Cagdas Ozgenc.] Moreover, this work appears to be part of a continuous line of research conducted by Tikhonov over a much longer time. For example the paper
Tikhonov, Andrey Nikolayevich (1943). "Об устойчивости обратных задач" [On the stability of inverse problems]. Doklady Akademii Nauk SSSR. 39 (5): 195–198.
shows that he was engaged in the same general topic at least 20 years prior. However this timeline suggests that probably the inverse-problem work started much closer to 1963 than to 1943.
[UPDATE: This translation of the 1943 paper shows that the terminology of "regularity" was here used to refer to the "stability of the inverse problem (or the continuity of the inverse mapping)".]
The second part of my contribution is a hypothesis on how "regularization" may have been originally intended in this context. Quite commonly "regular" is used as a synonym for "smooth", particularly in describing curve and/or surface geometry. In most geophysics applications, the desired solution is some gridded estimate of a spatially distributed field, and Tikhonov regularization is used to impose a smoothness prior.
(The Tikhonov matrix will typically be a discrete spatial derivative operator, akin to PDE matrices, vs. the identity matrix of ridge regression. This is because for these grids/forward models, the null-space of the forward-model matrix tends to include things like "checkerboard modes" that will pollute the results unless penalized; similar to this).
Update: These issues are illustrated in my answer here.
Summary
I also cast my vote for Tikhonov as the originator (likely circa 1963)
The original applications may have been geophysical inverse modeling, so the term "regularization" may refer to making the resulting maps* more smooth, i.e. "regular".
(*Based on the updated quote from the 1943 paper, this phrasing appears to be true ... but for the wrong reason! The relevant "map" was not between grid and field, $u[x]=F[\theta]$, but the inverse mapping from a forward model $\theta=F^{-1}[u]$.)
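To illustrate the smoothness-prior reading of Tikhonov regularization numerically, here is a toy 1-D deblurring problem (the grid size, blur width, and λ are arbitrary choices): penalizing the second difference of the solution recovers a smooth field far better than plain least squares on an ill-conditioned forward model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Smooth "true" field on a grid.
x_true = np.sin(np.linspace(0, np.pi, n))

# Ill-conditioned forward model: a wide Gaussian blur.
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 4.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

b = A @ x_true + 0.01 * rng.normal(size=n)   # noisy observations

# Plain least squares: the noise is wildly amplified.
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# Tikhonov with a second-difference operator L (a smoothness prior):
# minimize ||A x - b||^2 + lam * ||L x||^2
L = np.diff(np.eye(n), n=2, axis=0)
lam = 1e-3
x_tik = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

err_ls = np.linalg.norm(x_ls - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
print(f"least-squares error: {err_ls:.3f}, Tikhonov error: {err_tik:.3f}")
```

Replacing L with the identity matrix would give ordinary ridge regression; the derivative operator is what makes the penalty a "regularity" (smoothness) prior in the sense discussed above.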
|
9,286
|
The origin of the term "regularization"
|
This is part answer, part long comment. An incomplete list of candidates:
Tikhonov, Andrey. "Solution of incorrectly formulated problems and the regularization method." Soviet Math. Dokl.. Vol. 5. 1963. Tikhonov is known for Tikhonov regularization (also known as ridge regression).
There's a concept of regularization in physics that goes back at least to the 1940s, but I don't see any connection with Tikhonov regularization? (I'm not a physicist though.)
Engineering texts speak of regularization of a river (to improve navigation) going back at least to the 1880s.
Searching through http://books.google.com, I don't see widespread use of the term "regularization" until the 1970s, when it starts showing up again and again and again in the context of mathematics and physics books.
|
9,287
|
The origin of the term "regularization"
|
Most simply, the term survived the natural evolution of scientific terms because it captures the core goal of the technique: from a bunch of solutions to an ill-posed problem, it chooses the solutions which are regular, that is,
according to rule
(free dictionary's definition)
This is also used in common language, for instance when describing a smooth surface in carpentry. Similarly, the solutions of a regression problem will look more regular if the rule is to minimize the total variation (TV) of unsmooth bits of the reconstructed signal (as measured by the total energy of the gradient, for instance).
The term became widespread because it is very generic: anyone can define their own rule, from TV to L1-norm measures or the $\ell_0$ pseudo-norm! As such, the rule may play a similar role to the prior in Bayesian statistics.
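As a tiny numerical illustration of one such rule (total variation of a 1-D signal, computed here as the sum of absolute first differences; the signals are arbitrary examples): a "regular" signal has low TV, while the same signal plus noise does not.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)

smooth = np.sin(2 * np.pi * t)                  # a "regular" signal
rough = smooth + 0.2 * rng.normal(size=t.size)  # same signal plus noise

def total_variation(x):
    # The TV rule: total amount of up-and-down movement in the signal.
    return np.sum(np.abs(np.diff(x)))

tv_smooth = total_variation(smooth)
tv_rough = total_variation(rough)
print(f"TV(smooth) = {tv_smooth:.2f}, TV(rough) = {tv_rough:.2f}")
```

A regularizer built on this rule would prefer the first reconstruction over the second, which is exactly the sense in which it selects "regular" solutions.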
|
9,288
|
Is variation the same as variance?
|
Here's a full wikipedia article discussing this topic: http://en.wikipedia.org/wiki/Statistical_dispersion
As described by others in the comments here, the short answer is: no, variation $\ne$ variance. Synonyms for "variation" are spread, dispersion, scatter and variability. It's just a way of talking about the behavior of the data in a general sense as either having a lot of density over a narrow interval (generally near the mean, but not necessarily if the distribution is skewed) or spread out over a wide range. Variance is a particular measure of variability, but others exist (and several are enumerated in the linked article).
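A quick numpy illustration (the sample is an arbitrary example) of how variance is just one member of this family of dispersion measures, alongside several others that generally take different values on the same data:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

variance = np.var(x)                       # one particular measure...
std = np.std(x)                            # ...and its square root
iqr = np.percentile(x, 75) - np.percentile(x, 25)   # interquartile range
mad = np.median(np.abs(x - np.median(x)))  # median absolute deviation
cv = std / np.mean(x)                      # coefficient of variation

print(variance, std, iqr, mad, cv)
```

All five numbers describe the spread (variation) of the same sample, but only one of them is the variance.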
|
9,289
|
Is variation the same as variance?
|
Variation may be understood best as a general term for a class of different concepts, of which variance $(\sigma^2)$ is only one. Levine and Roos (1997) also consider standard-deviation $(\sigma)$ a variation concept, among others.
To demonstrate why the distinction might be important, compare also the coefficient-of-variation $(\frac\sigma\mu)$, and the mathematical concept, total variation, which has several definitions unto itself. Then there are all manner of qualitative variation, which are mentioned in the Wikipedia article @DavidMarx linked. These pages corroborate his answer BTW; statistical dispersion or variability are better synonyms for variation than variance, which is clearly not so synonymous.
BTW, here's a cool GIF of one kind of total variation: the length of the path on the $y$ axis that the red ball travels.
Definitely not the same as variance!
Reference
Levine, J. H., & Roos, T. B. (1997). Description: Numbers for the variation. Introduction to data analysis: The rules of evidence (Volume I:074). Dartmouth College. Retrieved from http://www.dartmouth.edu/~mss/data%20analysis/Volume%20I%20pdf%20/074%20Description%20%20Numb%20for.pdf
|
9,290
|
Earth Mover's Distance (EMD) between two Gaussians
|
$\DeclareMathOperator\EMD{\mathrm{EMD}}
\DeclareMathOperator\E{\mathbb{E}}
\DeclareMathOperator\Var{Var}
\DeclareMathOperator\N{\mathcal{N}}
\DeclareMathOperator\tr{\mathrm{tr}}
\newcommand\R{\mathbb R}$The
earth mover's distance can be written as $\EMD(P, Q) = \inf \E \lVert X - Y \rVert$, where the infimum is taken over all joint distributions of $X$ and $Y$ with marginals $X \sim P$, $Y \sim Q$.
This is also known as the first Wasserstein distance, which is $W_p = \inf \left( \E \lVert X - Y \rVert^p \right)^{1/p}$ with the same infimum.
Let $X \sim P = \N(\mu_x, \Sigma_x)$, $Y \sim Q = \N(\mu_y, \Sigma_y)$.
Lower bound: By Jensen's inequality, since norms are convex,
$$\E \lVert X - Y \rVert \ge \lVert \E (X - Y) \rVert = \lVert \mu_x - \mu_y \rVert,$$
so the EMD is always at least the distance between the means (for any distributions).
Upper bound based on $W_2$:
Again by Jensen's inequality,
$\left( \E \lVert X - Y \rVert \right)^2 \le \E \lVert X - Y \rVert^2$.
Thus $W_1 \le W_2$.
But Dowson and Landau (1982) establish that
$$
W_2(P, Q)^2
= \lVert \mu_x - \mu_y \rVert^2
+ \tr\left( \Sigma_x + \Sigma_y - 2 (\Sigma_x \Sigma_y)^{1/2} \right)
,$$
giving an upper bound on $\EMD = W_1$.
A tighter upper bound:
Consider the coupling
\begin{align}
X &\sim \N(\mu_x, \Sigma_x) \\
Y &= \mu_y + \underbrace{\Sigma_x^{-\frac12} \left( \Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 \right)^\frac12 \Sigma_x^{-\frac12}}_A (X - \mu_x)
.\end{align}
This is the map derived by Knott and Smith (1984), On the optimal mapping of distributions, Journal of Optimization Theory and Applications, 43 (1) pp 39-49 as the optimal mapping for $W_2$; see also this blog post.
Note that $A = A^T$ and
\begin{align}
\E Y &= \mu_y + A (\E X - \mu_x) = \mu_y \\
\Var Y &= A \Sigma_x A^T
\\&= \Sigma_x^{-\frac12} \left( \Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 \right)^\frac12 \Sigma_x^{-\frac12} \Sigma_x \Sigma_x^{-\frac12} \left( \Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 \right)^\frac12 \Sigma_x^{-\frac12}
\\&= \Sigma_x^{-\frac12} \left( \Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 \right) \Sigma_x^{-\frac12}
\\&= \Sigma_y
,\end{align}
so the coupling is valid.
The distance $\lVert X - Y \rVert$ is then $\lVert D \rVert$, where now
\begin{align}
D
&= X - Y
\\&= X - \mu_y - A (X - \mu_x)
\\&= (I - A) X - \mu_y + A \mu_x
,\end{align}
which is normal with
\begin{align}
\E D &= \mu_x - \mu_y \\
\Var D
&= (I - A) \Sigma_x (I - A)^T
\\&= \Sigma_x + A \Sigma_x A - A \Sigma_x - \Sigma_x A
\\&= \Sigma_x + \Sigma_y - \Sigma_x^{-\frac12} \left( \Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 \right)^\frac12 \Sigma_x^{\frac12} - \Sigma_x^{\frac12} \left( \Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 \right)^\frac12 \Sigma_x^{-\frac12}
.\end{align}
Thus an upper bound for $W_1(P, Q)$ is $\E \lVert D \rVert$.
Unfortunately, a closed form for this expectation is surprisingly unpleasant to write down for general multivariate normals: see this question, as well as this one.
If the variance of $D$ ends up being spherical (e.g. if $\Sigma_x = \sigma_x^2 I$, $\Sigma_y = \sigma_y^2 I$, then the variance of $D$ becomes $(\sigma_x - \sigma_y)^2 I$), the former question gives the answer in terms of a generalized Laguerre polynomial.
In general, we have a simple upper bound for $\E \lVert D \rVert$ based on Jensen's inequality, derived e.g. in that first question:
\begin{align}
\left( \E \lVert D \rVert \right)^2
&\le \E \lVert D \rVert^2
\\&= \lVert \mu_x - \mu_y \rVert^2
+ \tr\left( \Sigma_x + \Sigma_y - A \Sigma_x - \Sigma_x A \right)
\\&= \lVert \mu_x - \mu_y \rVert^2
+ \tr\left( \Sigma_x \right)
+ \tr\left( \Sigma_y \right)
- 2 \tr\left( \Sigma_x^{-\frac12} \left(\Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 \right)^\frac12 \Sigma_x^{\frac12} \right)
\\&= \lVert \mu_x - \mu_y \rVert^2
+ \tr\left( \Sigma_x \right)
+ \tr\left( \Sigma_y \right)
- 2 \tr\left( \left(\Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 \right)^\frac12 \right)
\\&= W_2(P, Q)^2
.\end{align}
The equality at the end is because the matrices $\Sigma_x \Sigma_y$ and $\Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 = \Sigma_x^{-\frac12} (\Sigma_x \Sigma_y) \Sigma_x^{\frac12}$ are similar, so they have the same eigenvalues, and thus their square roots have the same trace.
This inequality is strict as long as $\lVert D \rVert$ isn't degenerate, which is usually the case when $\Sigma_x \ne \Sigma_y$.
A conjecture: Maybe this closer upper bound, $\E \lVert D \rVert$, is tight. Then again, I had a different upper bound here for a long time that I conjectured to be tight that was in fact looser than the $W_2$ one, so maybe you shouldn't trust this conjecture too much. :)
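The sandwich $\lVert \mu_x - \mu_y \rVert \le W_1 \le W_2$ is easy to check numerically. A numpy/scipy sketch (the example Gaussians are arbitrary; `scipy.linalg.sqrtm` computes the matrix square root in the Dowson–Landau formula):

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(mu_x, sigma_x, mu_y, sigma_y):
    """Dowson-Landau closed form for W2 between two Gaussians."""
    cross = sqrtm(sigma_x @ sigma_y)          # (Sigma_x Sigma_y)^{1/2}
    w2_sq = (np.sum((mu_x - mu_y) ** 2)
             + np.trace(sigma_x + sigma_y - 2 * np.real(cross)))
    return np.sqrt(max(w2_sq, 0.0))

mu_x = np.array([0.0, 0.0])
mu_y = np.array([3.0, 4.0])
sigma_x = np.array([[2.0, 0.5], [0.5, 1.0]])
sigma_y = np.array([[1.0, -0.3], [-0.3, 2.0]])

lower = np.linalg.norm(mu_x - mu_y)                # ||mu_x - mu_y|| <= W1
upper = w2_gaussian(mu_x, sigma_x, mu_y, sigma_y)  # W1 <= W2
print(f"{lower:.4f} <= EMD <= {upper:.4f}")

# Sanity check: with equal covariances the trace term vanishes
# and W2 collapses to the distance between the means.
same = w2_gaussian(mu_x, sigma_x, mu_y, sigma_x)
```

`np.real` discards the negligible imaginary round-off that `sqrtm` can return for the non-symmetric product $\Sigma_x \Sigma_y$, whose eigenvalues are nonetheless real and positive.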
\\&= \Sigma_y
,\end{align}
so the coupling is valid.
The distance $\lVert X - Y \rVert$ is then $\lVert D \rVert$, where now
\begin{align}
D
&= X - Y
\\&= X - \mu_y - A (X - \mu_x)
\\&= (I - A) X - \mu_y + A \mu_x
,\end{align}
which is normal with
\begin{align}
\E D &= \mu_x - \mu_y \\
\Var D
&= (I - A) \Sigma_x (I - A)^T
\\&= \Sigma_x + A \Sigma_x A - A \Sigma_x - \Sigma_x A
\\&= \Sigma_x + \Sigma_y - \Sigma_x^{-\frac12} \left( \Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 \right)^\frac12 \Sigma_x^{\frac12} - \Sigma_x^{\frac12} \left( \Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 \right)^\frac12 \Sigma_x^{-\frac12}
.\end{align}
Thus an upper bound for $W_1(P, Q)$ is $\E \lVert D \rVert$.
Unfortunately, a closed form for this expectation is surprisingly unpleasant to write down for general multivariate normals: see this question, as well as this one.
If the variance of $D$ ends up being spherical (e.g. if $\Sigma_x = \sigma_x^2 I$ and $\Sigma_y = \sigma_y^2 I$, the variance of $D$ becomes $(\sigma_x - \sigma_y)^2 I$), the former question gives the answer in terms of a generalized Laguerre polynomial.
In general, we have a simple upper bound for $\E \lVert D \rVert$ based on Jensen's inequality, derived e.g. in that first question:
\begin{align}
\left( \E \lVert D \rVert \right)^2
&\le \E \lVert D \rVert^2
\\&= \lVert \mu_x - \mu_y \rVert^2
+ \tr\left( \Sigma_x + \Sigma_y - A \Sigma_x - \Sigma_x A \right)
\\&= \lVert \mu_x - \mu_y \rVert^2
+ \tr\left( \Sigma_x \right)
+ \tr\left( \Sigma_y \right)
- 2 \tr\left( \Sigma_x^{-\frac12} \left(\Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 \right)^\frac12 \Sigma_x^{\frac12} \right)
\\&= \lVert \mu_x - \mu_y \rVert^2
+ \tr\left( \Sigma_x \right)
+ \tr\left( \Sigma_y \right)
- 2 \tr\left( \left(\Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 \right)^\frac12 \right)
\\&= W_2(P, Q)^2
.\end{align}
The equality at the end is because the matrices $\Sigma_x \Sigma_y$ and $\Sigma_x^\frac12 \Sigma_y \Sigma_x^\frac12 = \Sigma_x^{-\frac12} (\Sigma_x \Sigma_y) \Sigma_x^{\frac12}$ are similar, so they have the same eigenvalues, and thus their square roots have the same trace.
This inequality is strict as long as $\lVert D \rVert$ isn't degenerate, which is the case in most situations where $\Sigma_x \ne \Sigma_y$.
A conjecture: Maybe this sharper upper bound, $\E \lVert D \rVert$, is tight. Then again, I had a different upper bound here for a long time that I conjectured to be tight but that was in fact looser than the $W_2$ one, so maybe you shouldn't trust this conjecture too much. :)
|
Earth Mover's Distance (EMD) between two Gaussians
$\DeclareMathOperator\EMD{\mathrm{EMD}}
\DeclareMathOperator\E{\mathbb{E}}
\DeclareMathOperator\Var{Var}
\DeclareMathOperator\N{\mathcal{N}}
\DeclareMathOperator\tr{\mathrm{tr}}
\newcommand\R{\mathbb
|
9,291
|
Who to follow on github to learn about best practice in data analysis?
|
Hadley Wickham. He has several exploratory data analysis projects on Github that you can look at (e.g., "data-baby-names"), and given the awesomeness of ggplot2/plyr/reshape, I have a default (but admittedly blind) trust in his best practices, particularly with respect to his own packages.
Plus, you get an early heads up on other projects he's working on!
|
Who to follow on github to learn about best practice in data analysis?
|
Hadley Wickham. He has several exploratory data analysis projects on Github that you can look at (e.g., "data-baby-names"), and given the awesomeness of ggplot2/plyr/reshape, I have a default (but adm
|
Who to follow on github to learn about best practice in data analysis?
Hadley Wickham. He has several exploratory data analysis projects on Github that you can look at (e.g., "data-baby-names"), and given the awesomeness of ggplot2/plyr/reshape, I have a default (but admittedly blind) trust in his best practices, particularly with respect to his own packages.
Plus, you get an early heads up on other projects he's working on!
|
Who to follow on github to learn about best practice in data analysis?
Hadley Wickham. He has several exploratory data analysis projects on Github that you can look at (e.g., "data-baby-names"), and given the awesomeness of ggplot2/plyr/reshape, I have a default (but adm
|
9,292
|
Who to follow on github to learn about best practice in data analysis?
|
I also follow John Myles White's GitHub repository. There are several data-oriented projects, but also interesting stuff for R developers:
ProjectTemplate, a template system for building R projects;
log4r, a logging system.
|
Who to follow on github to learn about best practice in data analysis?
|
I also follow John Myles White's GitHub repository. There are several data-oriented projects, but also interesting stuff for R developers:
ProjectTemplate, a template system for building R projects;
l
|
Who to follow on github to learn about best practice in data analysis?
I also follow John Myles White's GitHub repository. There are several data-oriented projects, but also interesting stuff for R developers:
ProjectTemplate, a template system for building R projects;
log4r, a logging system.
|
Who to follow on github to learn about best practice in data analysis?
I also follow John Myles White's GitHub repository. There are several data-oriented projects, but also interesting stuff for R developers:
ProjectTemplate, a template system for building R projects;
l
|
9,293
|
Who to follow on github to learn about best practice in data analysis?
|
Diego Valle Jones. His Github, especially his analysis of homicides in Mexico, is really interesting.
|
Who to follow on github to learn about best practice in data analysis?
|
Diego Valle Jones. His Github, especially his analysis of homicides in Mexico, is really interesting.
|
Who to follow on github to learn about best practice in data analysis?
Diego Valle Jones. His Github, especially his analysis of homicides in Mexico, is really interesting.
|
Who to follow on github to learn about best practice in data analysis?
Diego Valle Jones. His Github, especially his analysis of homicides in Mexico, is really interesting.
|
9,294
|
Who to follow on github to learn about best practice in data analysis?
|
If you are dealing with clinical data (e.g., medical imaging, EMR, or physiologic monitoring data), you can follow Ramesh Sridharan (@rameshvs), Matteo Fumagalli (@mfumagalli), and José Ignacio Orlando (@ignaciorlando), who are great on that. Although you may be looking for something broader in terms of data analysis, clinical data analysis has strong foundations for best practices, since it is a critical domain, so it is worth looking further into this field. Additionally, there is some interesting literature coming from this field [3, 4]; and if you want to take a look at medical data analysis for breast cancer, you can also follow my (@FMCalisto) work [1, 2].
References
[1] Calisto, F. M., Santiago, C., Nunes, N., & Nascimento, J. C. (2022). BreastScreening-AI: Evaluating medical intelligent agents for human-AI interactions. Artificial Intelligence in Medicine, 127, 102285.
[2] Calisto, F. M., Nunes, N., & Nascimento, J. C. (2020, September). BreastScreening: on the use of multi-modality in medical imaging diagnosis. In Proceedings of the international conference on advanced visual interfaces (pp. 1-5).
[3] Knight, R., Vrbanac, A., Taylor, B. C., Aksenov, A., Callewaert, C., Debelius, J., ... & Dorrestein, P. C. (2018). Best practices for analysing microbiomes. Nature Reviews Microbiology, 16(7), 410-422.
[4] McGinnis, J. M., Olsen, L., Goolsby, W. A., & Grossmann, C. (Eds.). (2011). Clinical data as the basic staple of health learning: Creating and protecting a public good: Workshop summary. National Academies Press.
|
Who to follow on github to learn about best practice in data analysis?
|
If you are dealing with clinical data (e.g., medical imaging, EMR, or physiologic monitoring data), you can follow Ramesh Sridharan (@rameshvs), Matteo Fumagalli (@mfumagalli), and José Ignacio Orlan
|
Who to follow on github to learn about best practice in data analysis?
If you are dealing with clinical data (e.g., medical imaging, EMR, or physiologic monitoring data), you can follow Ramesh Sridharan (@rameshvs), Matteo Fumagalli (@mfumagalli), and José Ignacio Orlando (@ignaciorlando), who are great on that. Although you may be looking for something broader in terms of data analysis, clinical data analysis has strong foundations for best practices, since it is a critical domain, so it is worth looking further into this field. Additionally, there is some interesting literature coming from this field [3, 4]; and if you want to take a look at medical data analysis for breast cancer, you can also follow my (@FMCalisto) work [1, 2].
References
[1] Calisto, F. M., Santiago, C., Nunes, N., & Nascimento, J. C. (2022). BreastScreening-AI: Evaluating medical intelligent agents for human-AI interactions. Artificial Intelligence in Medicine, 127, 102285.
[2] Calisto, F. M., Nunes, N., & Nascimento, J. C. (2020, September). BreastScreening: on the use of multi-modality in medical imaging diagnosis. In Proceedings of the international conference on advanced visual interfaces (pp. 1-5).
[3] Knight, R., Vrbanac, A., Taylor, B. C., Aksenov, A., Callewaert, C., Debelius, J., ... & Dorrestein, P. C. (2018). Best practices for analysing microbiomes. Nature Reviews Microbiology, 16(7), 410-422.
[4] McGinnis, J. M., Olsen, L., Goolsby, W. A., & Grossmann, C. (Eds.). (2011). Clinical data as the basic staple of health learning: Creating and protecting a public good: Workshop summary. National Academies Press.
|
Who to follow on github to learn about best practice in data analysis?
If you are dealing with clinical data (e.g., medical imaging, EMR, or physiologic monitoring data), you can follow Ramesh Sridharan (@rameshvs), Matteo Fumagalli (@mfumagalli), and José Ignacio Orlan
|
9,295
|
Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
|
This answer will mostly focus on $R^2$, but most of this logic extends to other metrics such as AUC and so on.
This question can almost certainly not be answered well for you by readers at CrossValidated. There is no context-free way to decide whether model metrics such as $R^2$ are good or not. At the extremes, it is usually possible to get a consensus from a wide variety of experts: an $R^2$ of almost 1 generally indicates a good model, and of close to 0 indicates a terrible one. In between lies a range where assessments are inherently subjective. In this range, it takes more than just statistical expertise to answer whether your model metric is any good. It takes additional expertise in your area, which CrossValidated readers probably do not have.
Why is this? Let me illustrate with an example from my own experience (minor details changed).
I used to do microbiology lab experiments. I would set up flasks of cells at different levels of nutrient concentration, and measure the growth in cell density (i.e. slope of cell density against time, though this detail is not important). When I then modelled this growth/nutrient relationship, it was common to achieve $R^2$ values of >0.90.
I am now an environmental scientist. I work with datasets containing measurements from nature. If I try to fit the exact same model described above to these ‘field’ datasets, I’d be surprised if the $R^2$ was as high as 0.4.
These two cases involve exactly the same parameters, with very similar measurement methods, models written and fitted using the same procedures - and even the same person doing the fitting! But in one case, an $R^2$ of 0.7 would be worryingly low, and in the other it would be suspiciously high.
Furthermore, we would take some chemistry measurements alongside the biological measurements. Models for the chemistry standard curves would have $R^2$ around 0.99, and a value of 0.90 would be worryingly low.
What leads to these big differences in expectations? Context. That vague term covers a vast area, so let me try to separate it into some more specific factors (this is likely incomplete):
1. What is the payoff / consequence / application?
This is where the nature of your field is likely to be most important. However valuable I think my work is, bumping up my model $R^2$s by 0.1 or 0.2 is not going to revolutionize the world. But there are applications where that magnitude of change would be a huge deal! A much smaller improvement in a stock forecast model could mean tens of millions of dollars to the firm that develops it.
This is even easier to illustrate for classifiers, so I’m going to switch my discussion of metrics from $R^2$ to accuracy for the following example (ignoring the weakness of the accuracy metric for the moment). Consider the strange and lucrative world of chicken sexing. After years of training, a human can rapidly tell the difference between a male and female chick when they are just 1 day old. Males and females are fed differently to optimize meat & egg production, so high accuracy saves huge amounts in misallocated investment in billions of birds. Till a few decades ago, accuracies of about 85% were considered high in the US. Nowadays, the value of achieving the very highest accuracy, of around 99%? A salary that can apparently range as high as 60,000 to possibly 180,000 dollars per year (based on some quick googling). Since humans are still limited in the speed at which they work, machine learning algorithms that can achieve similar accuracy but allow sorting to take place faster could be worth millions.
(I hope you enjoyed the example – the alternative was a depressing one about very questionable algorithmic identification of terrorists).
2. How strong is the influence of unmodelled factors in your system?
In many experiments, you have the luxury of isolating the system from all other factors that may influence it (that’s partly the goal of experimentation, after all). Nature is messier. To continue with the earlier microbiology example: cells grow when nutrients are available but other things affect them too – how hot it is, how many predators there are to eat them, whether there are toxins in the water. All of those covary with nutrients and with each other in complex ways. Each of those other factors drives variation in the data that is not being captured by your model. Nutrients may be unimportant in driving variation relative to the other factors, and so if I exclude those other factors, my model of my field data will necessarily have a lower $R^2$.
3. How precise and accurate are your measurements?
Measuring the concentration of cells and chemicals can be extremely precise and accurate. Measuring (for example) the emotional state of a community based on trending twitter hashtags is likely to be…less so. If you cannot be precise in your measurements, it is unlikely that your model can ever achieve a high $R^2$. How precise are measurements in your field? We probably do not know.
4. Model complexity and generalizability
If you add more factors to your model, even random ones, you will on average increase the model $R^2$ (adjusted $R^2$ partly addresses this). This is overfitting. An overfit model will not generalize well to new data i.e. will have higher prediction error than expected based on the fit to the original (training) dataset. This is because it has fit the noise in the original dataset. This is partly why models are penalized for complexity in model selection procedures, or subjected to regularization.
If overfitting is ignored or not successfully prevented, the estimated $R^2$ will be biased upward i.e. higher than it ought to be. In other words, your $R^2$ value can give you a misleading impression of your model’s performance if it is overfit.
IMO, overfitting is surprisingly common in many fields. How best to avoid this is a complex topic, and I recommend reading about regularization procedures and model selection on this site if you are interested in this.
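The in-sample inflation is easy to demonstrate numerically: fit OLS to a fixed response while appending pure-noise predictors and watch $R^2$ climb. A minimal sketch on simulated data (numpy only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=(n, 1))
y = 2 * x[:, 0] + rng.normal(size=n)  # the true relationship uses only x

def r_squared(X, y):
    """In-sample R^2 of an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2s, X = [], x
for _ in range(10):
    r2s.append(r_squared(X, y))
    X = np.column_stack([X, rng.normal(size=n)])  # append a pure-noise predictor
# In-sample R^2 never decreases as junk predictors are added,
# even though they carry no information about y.
```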
5. Data range and extrapolation
Does your dataset extend across a substantial portion of the range of X values you are interested in? Adding new data points outside the existing data range can have a large effect on estimated $R^2$, since it is a metric based on the variance in X and Y.
Aside from this, if you fit a model to a dataset and need to predict a value outside the X range of that dataset (i.e. extrapolate), you might find that its performance is lower than you expect. This is because the relationship you have estimated might well change outside the data range you fitted. In the figure below, if you took measurements only in the range indicated by the green box, you might imagine that a straight line (in red) described the data well. But if you attempted to predict a value outside that range with that red line, you would be quite incorrect.
[The figure is an edited version of this one, found via a quick google search for 'Monod curve'.]
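The figure's point can be reproduced numerically: fit a straight line to data from only the low-concentration part of a saturating (Monod-type) curve, then compare in-range and out-of-range predictions. A sketch with made-up parameter values:

```python
import numpy as np

Vmax, K = 10.0, 5.0
monod = lambda s: Vmax * s / (K + s)  # the saturating 'true' relationship

# Fit a straight line using only the low-concentration window (the 'green box')
s_in = np.linspace(0.1, 2.0, 50)
slope, intercept = np.polyfit(s_in, monod(s_in), 1)
line = lambda s: intercept + slope * s

inside_err = abs(line(1.0) - monod(1.0))     # interpolation: small error
outside_err = abs(line(50.0) - monod(50.0))  # extrapolation: huge error
```

Inside the fitted window the line is a fine approximation; far outside it, the line keeps rising while the true curve saturates near $V_{max}$, so the prediction error explodes.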
6. Metrics only give you a piece of the picture
This is not really a criticism of the metrics – they are summaries, which means that they also throw away information by design. But it does mean that any single metric leaves out information that can be crucial to its interpretation. A good analysis takes into consideration more than a single metric.
Suggestions, corrections and other feedback welcome. And other answers too, of course.
|
Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
|
This answer will mostly focus on $R^2$, but most of this logic extends to other metrics such as AUC and so on.
This question can almost certainly not be answered well for you by readers at CrossValida
|
Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
This answer will mostly focus on $R^2$, but most of this logic extends to other metrics such as AUC and so on.
This question can almost certainly not be answered well for you by readers at CrossValidated. There is no context-free way to decide whether model metrics such as $R^2$ are good or not. At the extremes, it is usually possible to get a consensus from a wide variety of experts: an $R^2$ of almost 1 generally indicates a good model, and of close to 0 indicates a terrible one. In between lies a range where assessments are inherently subjective. In this range, it takes more than just statistical expertise to answer whether your model metric is any good. It takes additional expertise in your area, which CrossValidated readers probably do not have.
Why is this? Let me illustrate with an example from my own experience (minor details changed).
I used to do microbiology lab experiments. I would set up flasks of cells at different levels of nutrient concentration, and measure the growth in cell density (i.e. slope of cell density against time, though this detail is not important). When I then modelled this growth/nutrient relationship, it was common to achieve $R^2$ values of >0.90.
I am now an environmental scientist. I work with datasets containing measurements from nature. If I try to fit the exact same model described above to these ‘field’ datasets, I’d be surprised if the $R^2$ was as high as 0.4.
These two cases involve exactly the same parameters, with very similar measurement methods, models written and fitted using the same procedures - and even the same person doing the fitting! But in one case, an $R^2$ of 0.7 would be worryingly low, and in the other it would be suspiciously high.
Furthermore, we would take some chemistry measurements alongside the biological measurements. Models for the chemistry standard curves would have $R^2$ around 0.99, and a value of 0.90 would be worryingly low.
What leads to these big differences in expectations? Context. That vague term covers a vast area, so let me try to separate it into some more specific factors (this is likely incomplete):
1. What is the payoff / consequence / application?
This is where the nature of your field is likely to be most important. However valuable I think my work is, bumping up my model $R^2$s by 0.1 or 0.2 is not going to revolutionize the world. But there are applications where that magnitude of change would be a huge deal! A much smaller improvement in a stock forecast model could mean tens of millions of dollars to the firm that develops it.
This is even easier to illustrate for classifiers, so I’m going to switch my discussion of metrics from $R^2$ to accuracy for the following example (ignoring the weakness of the accuracy metric for the moment). Consider the strange and lucrative world of chicken sexing. After years of training, a human can rapidly tell the difference between a male and female chick when they are just 1 day old. Males and females are fed differently to optimize meat & egg production, so high accuracy saves huge amounts in misallocated investment in billions of birds. Till a few decades ago, accuracies of about 85% were considered high in the US. Nowadays, the value of achieving the very highest accuracy, of around 99%? A salary that can apparently range as high as 60,000 to possibly 180,000 dollars per year (based on some quick googling). Since humans are still limited in the speed at which they work, machine learning algorithms that can achieve similar accuracy but allow sorting to take place faster could be worth millions.
(I hope you enjoyed the example – the alternative was a depressing one about very questionable algorithmic identification of terrorists).
2. How strong is the influence of unmodelled factors in your system?
In many experiments, you have the luxury of isolating the system from all other factors that may influence it (that’s partly the goal of experimentation, after all). Nature is messier. To continue with the earlier microbiology example: cells grow when nutrients are available but other things affect them too – how hot it is, how many predators there are to eat them, whether there are toxins in the water. All of those covary with nutrients and with each other in complex ways. Each of those other factors drives variation in the data that is not being captured by your model. Nutrients may be unimportant in driving variation relative to the other factors, and so if I exclude those other factors, my model of my field data will necessarily have a lower $R^2$.
3. How precise and accurate are your measurements?
Measuring the concentration of cells and chemicals can be extremely precise and accurate. Measuring (for example) the emotional state of a community based on trending twitter hashtags is likely to be…less so. If you cannot be precise in your measurements, it is unlikely that your model can ever achieve a high $R^2$. How precise are measurements in your field? We probably do not know.
4. Model complexity and generalizability
If you add more factors to your model, even random ones, you will on average increase the model $R^2$ (adjusted $R^2$ partly addresses this). This is overfitting. An overfit model will not generalize well to new data i.e. will have higher prediction error than expected based on the fit to the original (training) dataset. This is because it has fit the noise in the original dataset. This is partly why models are penalized for complexity in model selection procedures, or subjected to regularization.
If overfitting is ignored or not successfully prevented, the estimated $R^2$ will be biased upward i.e. higher than it ought to be. In other words, your $R^2$ value can give you a misleading impression of your model’s performance if it is overfit.
IMO, overfitting is surprisingly common in many fields. How best to avoid this is a complex topic, and I recommend reading about regularization procedures and model selection on this site if you are interested in this.
5. Data range and extrapolation
Does your dataset extend across a substantial portion of the range of X values you are interested in? Adding new data points outside the existing data range can have a large effect on estimated $R^2$, since it is a metric based on the variance in X and Y.
Aside from this, if you fit a model to a dataset and need to predict a value outside the X range of that dataset (i.e. extrapolate), you might find that its performance is lower than you expect. This is because the relationship you have estimated might well change outside the data range you fitted. In the figure below, if you took measurements only in the range indicated by the green box, you might imagine that a straight line (in red) described the data well. But if you attempted to predict a value outside that range with that red line, you would be quite incorrect.
[The figure is an edited version of this one, found via a quick google search for 'Monod curve'.]
6. Metrics only give you a piece of the picture
This is not really a criticism of the metrics – they are summaries, which means that they also throw away information by design. But it does mean that any single metric leaves out information that can be crucial to its interpretation. A good analysis takes into consideration more than a single metric.
Suggestions, corrections and other feedback welcome. And other answers too, of course.
|
Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
This answer will mostly focus on $R^2$, but most of this logic extends to other metrics such as AUC and so on.
This question can almost certainly not be answered well for you by readers at CrossValida
|
9,296
|
Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
|
This problem comes up in my field of hydrology when assessing how well models predict streamflow from rainfall and climate data. Some researchers (Chiew and McMahon, 1993) surveyed 93 hydrologists (63 responded) to find out what diagnostic plots and goodness-of-fit statistics they used, which were the most important, and how they were used to classify the quality of a model fit. The results are now dated but the approach may still be of interest. They presented the results of model fits of various qualities and asked hydrologists to classify them into 4 categories: (1) perfectly acceptable result; (2) acceptable but use with reservation; (3) unacceptable, use only if there is no other alternative; and (4) never use under any condition.
The most important diagnostic graphs were timeseries plots and scatter plots of simulated and recorded flows from the data used for calibration. R-squared and the Nash-Sutcliffe model efficiency coefficient (E) were the favoured goodness-of-fit statistics. For example, results were considered acceptable if E ≥ 0.8.
There are other examples in the literature. When assessing an ecosystem model in the North Sea, the following categorisation was used: E > 0.65 as excellent, 0.5 to 0.65 as very good, 0.2 to 0.5 as good, and <0.2 as poor (Allen et al., 2007).
Moriasi et al., (2015) provides tables of acceptable values for metrics for various types of models.
I've summarised this information and references in a blog post.
Allen, J., P. Somerfield, and F. Gilbert (2007), Quantifying uncertainty in high‐resolution coupled hydrodynamic‐ecosystem models, J. Mar. Syst.,64(1–4), 3–14, doi:10.1016/j.jmarsys.2006.02.010.
Moriasi, D., Gitau, M., Pai, N., and Daggupati, P. (2015) Hydrologic and Water Quality Models: Performance Measures and Evaluation Criteria. Transactions of the ASABE (American Society of Agricultural and Biological Engineers) 58(6):1763-1785
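For reference, the Nash-Sutcliffe efficiency used above is one line to compute. A sketch together with the Allen et al. (2007) categorisation quoted earlier (function names are mine):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means no better
    than predicting the mean of the observations; negative is worse."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def allen_category(e):
    """Qualitative rating following the Allen et al. (2007) thresholds."""
    if e > 0.65:
        return "excellent"
    if e > 0.5:
        return "very good"
    if e > 0.2:
        return "good"
    return "poor"
```

For example, a simulation that simply predicts the mean of the observed flows scores E = 0, which falls in the "poor" band.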
|
Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
|
This problem comes up in my field of hydrology when assessing how well models predict streamflow from rainfall and climate data. Some researchers (Chiew and McMahon, 1993) surveyed 93 hydrologists, (
|
Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
This problem comes up in my field of hydrology when assessing how well models predict streamflow from rainfall and climate data. Some researchers (Chiew and McMahon, 1993) surveyed 93 hydrologists (63 responded) to find out what diagnostic plots and goodness-of-fit statistics they used, which were the most important, and how they were used to classify the quality of a model fit. The results are now dated but the approach may still be of interest. They presented the results of model fits of various qualities and asked hydrologists to classify them into 4 categories: (1) perfectly acceptable result; (2) acceptable but use with reservation; (3) unacceptable, use only if there is no other alternative; and (4) never use under any condition.
The most important diagnostic graphs were timeseries plots and scatter plots of simulated and recorded flows from the data used for calibration. R-squared and the Nash-Sutcliffe model efficiency coefficient (E) were the favoured goodness-of-fit statistics. For example, results were considered acceptable if E ≥ 0.8.
There are other examples in the literature. When assessing an ecosystem model in the North Sea, the following categorisation was used: E > 0.65 as excellent, 0.5 to 0.65 as very good, 0.2 to 0.5 as good, and <0.2 as poor (Allen et al., 2007).
Moriasi et al., (2015) provides tables of acceptable values for metrics for various types of models.
I've summarised this information and references in a blog post.
Allen, J., P. Somerfield, and F. Gilbert (2007), Quantifying uncertainty in high‐resolution coupled hydrodynamic‐ecosystem models, J. Mar. Syst.,64(1–4), 3–14, doi:10.1016/j.jmarsys.2006.02.010.
Moriasi, D., Gitau, M., Pai, N., and Daggupati, P. (2015) Hydrologic and Water Quality Models: Performance Measures and Evaluation Criteria. Transactions of the ASABE (American Society of Agricultural and Biological Engineers) 58(6):1763-1785
|
Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
This problem comes up in my field of hydrology when assessing how well models predict streamflow from rainfall and climate data. Some researchers (Chiew and McMahon, 1993) surveyed 93 hydrologists, (
|
9,297
|
Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
|
Just to add to the great answers above - in my experience, evaluation metrics and diagnostic tools are as good and honest as the person using them. That is, if you understand the mathematics behind them, then you can likely artificially increase them to make your model appear better without increasing its actual utility.
For example, as mentioned in one of the comments, in some applications $R^2=0.03 \to R^2 = 0.05$ can be a great performance boost. However, if this increase was obtained artificially (i.e., by arbitrarily removing some observations), then it is not sincere and arguably provides little utility.
I'll keep this answer short since the above do a great job providing explanations/references. I just wanted to add some perspective on the section on 6. Metrics only give you a piece of the picture by mkt's answer.
Hope this helps.
|
9,298
|
What is the statistical model behind the SVM algorithm?
|
You can often write a model that corresponds to a loss function (here I'm going to talk about SVM regression rather than SVM-classification; it's particularly simple)
For example, in a linear model, if your loss function is $\sum_i g(\varepsilon_i) = \sum_i g(y_i-x_i'\beta)$ then minimizing that will correspond to maximum likelihood for $f\propto \exp(-a\,g(\varepsilon))$ $= \exp(-a\,g(y-x'\beta))$. (Here I have a linear kernel)
If I recall correctly, SVM-regression has an ε-insensitive loss function: zero on an interval around the origin and increasing linearly outside it.
That corresponds to a density that is uniform in the middle with exponential tails (as we see by exponentiating its negative, or some multiple of its negative).
There's a 3 parameter family of these: corner-location (relative insensitivity threshold) plus location and scale.
It's an interesting density; if I recall rightly from looking at that particular distribution a few decades ago, a good estimator for location for it is the average of two symmetrically-placed quantiles corresponding to where the corners are (e.g. midhinge would give a good approximation to MLE for one particular choice of the constant in the SVM loss); a similar estimator for the scale parameter would be based on their difference, while the third parameter corresponds basically to working out which percentile the corners are at (this might be chosen rather than estimated as it often is for SVM).
So at least for SVM regression it seems pretty straightforward, at least if we're choosing to get our estimators by maximum likelihood.
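The "uniform middle, exponential tails" shape can be checked numerically. A sketch, assuming the loss is the usual ε-insensitive one, $g(\varepsilon) = \max(0, |\varepsilon| - \epsilon)$, with arbitrary constants of my own choosing:

```python
import numpy as np

a, eps = 1.5, 0.5                       # arbitrary scale and corner location

def loss(e):
    # epsilon-insensitive loss: flat of half-width eps, then linear
    return np.maximum(0.0, np.abs(e) - eps)

def unnormalised_density(e):
    # exponentiate the negative loss, as in the text
    return np.exp(-a * loss(e))

# Flat in the middle: the density is constant on [-eps, eps]
middle = unnormalised_density(np.linspace(-eps, eps, 101))
print(np.allclose(middle, middle[0]))                     # True

# Exponential tails: the normalising constant has the closed form 2*eps + 2/a
grid = np.linspace(-60.0, 60.0, 600_001)
y = unnormalised_density(grid)
numeric = np.sum((y[1:] + y[:-1]) / 2 * np.diff(grid))    # trapezoid rule
print(abs(numeric - (2 * eps + 2 / a)) < 1e-6)            # True
```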
(In case you're about to ask ... I have no reference for this particular connection to SVM: I just worked that out now. It's so simple, however, that dozens of people will have worked it out before me so no doubt there are references for it -- I've just never seen any.)
|
9,299
|
What is the statistical model behind the SVM algorithm?
|
I think someone already answered your literal question, but let me clear up a potential confusion.
Your question is somewhat similar to the following:
I have this function $f(x) = \ldots$ and I'm wondering what differential equation it is a solution to?
In other words, it certainly has a valid answer (perhaps even a unique one if you impose regularity constraints), but it's a rather strange question to ask, since it was not a differential equation that gave rise to that function in the first place.
(On the other hand, given the differential equation, it is natural to ask for its solution, since that's usually why you write the equation!)
Here's why: I think you're thinking of probabilistic/statistical models—specifically, generative and discriminative models, based on estimating joint and conditional probabilities from data.
The SVM is neither. It's an entirely different kind of model—one that bypasses those and attempts to directly model the final decision boundary, the probabilities be damned.
Since it's about finding the shape of the decision boundary, the intuition behind it is geometric (or perhaps we should say optimization-based) rather than probabilistic or statistical.
Given that probabilities aren't really considered anywhere along the way, it's rather unusual to ask what a corresponding probabilistic model could be, especially since the entire goal was to avoid having to worry about probabilities. That's why you don't see people talking about them.
|
9,300
|
Statistical methods for data where only a minimum/maximum value is known
|
This is referred to as current status data. You get one cross sectional view of the data, and regarding the response, all you know is that at the observed age of each subject, the event (in your case: transitioning from A to B) has happened or not. This is a special case of interval censoring.
To formally define it, let $T_i$ be the (unobserved) true event time for subject $i$. Let $C_i$ be the inspection time for subject $i$ (in your case: age at inspection). If $C_i < T_i$, the data are right censored. Otherwise, the data are left censored. We are interested in modeling the distribution of $T$. For regression models, we are interested in modeling how that distribution changes with a set of covariates $X$.
To analyze this using interval censoring methods, you want to put your data into the general interval censoring format. That is, for each subject, we have the interval $(l_i, r_i)$, which represents the interval in which we know $T_i$ to be contained. So if subject $i$ is right censored at inspection time $c_i$, we would write $(c_i, \infty)$. If it is left censored at $c_i$, we would represent it as $(0, c_i)$.
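Putting current status data into this format is mechanical; a small sketch (the helper name is mine, not from any package):

```python
import math

def to_interval(age_at_inspection, event_observed):
    """Map one current status observation to the general
    interval-censored format (l, r) containing the true event time T."""
    if event_observed:
        # event happened some time before inspection: left censored
        return (0.0, age_at_inspection)
    # event had not happened yet at inspection: right censored
    return (age_at_inspection, math.inf)

print(to_interval(40.0, True))    # (0.0, 40.0)
print(to_interval(40.0, False))   # (40.0, inf)
```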
Shameless plug: if you want to use regression models to analyze your data, this can be done in R using icenReg (I'm the author). In fact, in a similar question about current status data, the OP put up a nice demo of using icenReg. He starts by showing that ignoring the censoring part and using logistic regression leads to bias (important note: he is referring to using logistic regression without adjusting for age. More on this later.)
Another great package is interval, which contains log-rank statistic tests, among other tools.
EDIT:
@EdM suggested using logistic regression to answer the problem. I was unfairly dismissive of this, saying that you would have to worry about the functional form of time. While I stand behind the statement that you should worry about the functional form of time, I realized that there was a very reasonable transformation that leads to a reasonable parametric estimator.
In particular, if we use log(time) as a covariate in our model with logistic regression, we end up with a proportional odds model with a log-logistic baseline.
To see this, first consider that the proportional odds regression model is defined as
$\text{Odds}(t|X, \beta) = e^{X^T \beta} \text{Odds}_o(t)$
where $\text{Odds}_o(t)$ is the baseline odds of survival at time $t$. Note that the regression effects are the same as with logistic regression. So all we need to do now is show that the baseline distribution is log-logistic.
Now consider a logistic regression with log(Time) as a covariate. We then have
$P(Y = 1 | T = t) = \frac{\exp(\beta_0 + \beta_1 \log(t))}{1 + \exp(\beta_0 + \beta_1\log(t))}$
With a little work, you can see this as the CDF of a log-logistic model (with a non-linear transformation of the parameters).
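That reparameterisation can be checked numerically. A Python sketch of my own (matching the two CDFs above gives $\alpha = \exp(-\beta_0/\beta_1)$ and $\beta = \beta_1$, assuming $\beta_1 > 0$):

```python
import numpy as np

b0, b1 = -3.0, 1.2                      # arbitrary logistic coefficients

def logistic_in_log_t(t):
    # logistic regression with log(t) as the covariate
    z = b0 + b1 * np.log(t)
    return np.exp(z) / (1.0 + np.exp(z))

def loglogistic_cdf(t, alpha, beta):
    # standard log-logistic CDF
    return 1.0 / (1.0 + (t / alpha) ** (-beta))

alpha, beta = np.exp(-b0 / b1), b1      # the claimed reparameterisation
t = np.linspace(0.1, 100.0, 500)
print(np.allclose(logistic_in_log_t(t), loglogistic_cdf(t, alpha, beta)))  # True
```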
R demonstration that the fits are equivalent:
> library(icenReg)
> data(miceData)
>
> ## miceData contains current status data about presence
> ## of tumors at sacrifice in two groups
> ## in interval censored format:
> ## l = lower end of interval, u = upper end
> ## first three mice all left censored
>
> head(miceData, 3)
l u grp
1 0 381 ce
2 0 477 ce
3 0 485 ce
>
> ## To fit this with logistic regression,
> ## we need to extract age at sacrifice
> ## if the observation is left censored,
> ## this is the upper end of the interval
> ## if right censored, is the lower end of interval
>
> age <- numeric()
> isLeftCensored <- miceData$l == 0
> age[isLeftCensored] <- miceData$u[isLeftCensored]
> age[!isLeftCensored] <- miceData$l[!isLeftCensored]
>
> log_age <- log(age)
> resp <- !isLeftCensored
>
>
> ## Fitting logistic regression model
> logReg_fit <- glm(resp ~ log_age + grp,
+ data = miceData, family = binomial)
>
> ## Fitting proportional odds regression model with log-logistic baseline
> ## interval censored model
> ic_fit <- ic_par(cbind(l,u) ~ grp,
+ model = 'po', dist = 'loglogistic', data = miceData)
>
> summary(logReg_fit)
Call:
glm(formula = resp ~ log_age + grp, family = binomial, data = miceData)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.1413 -0.8052 0.5712 0.8778 1.8767
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 18.3526 6.7149 2.733 0.00627 **
log_age -2.7203 1.0414 -2.612 0.00900 **
grpge -1.1721 0.4713 -2.487 0.01288 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 196.84 on 143 degrees of freedom
Residual deviance: 160.61 on 141 degrees of freedom
AIC: 166.61
Number of Fisher Scoring iterations: 5
> summary(ic_fit)
Model: Proportional Odds
Baseline: loglogistic
Call: ic_par(formula = cbind(l, u) ~ grp, data = miceData, model = "po",
dist = "loglogistic")
Estimate Exp(Est) Std.Error z-value p
log_alpha 6.603 737.2000 0.07747 85.240 0.000000
log_beta 1.001 2.7200 0.38280 2.614 0.008943
grpge -1.172 0.3097 0.47130 -2.487 0.012880
final llk = -80.30575
Iterations = 10
>
> ## Comparing loglikelihoods
> logReg_fit$deviance/(-2) - ic_fit$llk
[1] 2.643219e-12
Note that the effect of grp is the same in each model, and the final log-likelihood differs only by numeric error. The baseline parameters (i.e. intercept and log_age for logistic regression, alpha and beta for the interval censored model) are different parameterizations so they are not equal.
So there you have it: using logistic regression is equivalent to fitting a proportional odds model with a log-logistic baseline distribution. If you're okay with fitting this parametric model, logistic regression is quite reasonable. I do caution that with interval-censored data, semi-parametric models are typically favored due to the difficulty of assessing model fit, but if I truly thought there was no place for fully parametric models I would not have included them in icenReg.
|