idx
int64 1
56k
| question
stringlengths 15
155
| answer
stringlengths 2
29.2k
⌀ | question_cut
stringlengths 15
100
| answer_cut
stringlengths 2
200
⌀ | conversation
stringlengths 47
29.3k
| conversation_cut
stringlengths 47
301
|
|---|---|---|---|---|---|---|
44,101
|
What methods to use for statistical prediction/forecast of trading data?
|
In general, in the markets, a mechanical trading system that has out-performed in the past, tends to under-perform in the future. If you're really lucky, then previous out-performance is just statistical noise, and your future performance is in line with the market.
In short, to your question "How can I calculate the change that over an X period of time, the strategy will, with a certain percentage of reliability) make money.", the answer is that you can't, and it probably won't: to calculate the probabilities, you need forward information about the markets, which you don't have; so you have to make a set of assumptions just to get the calculation to run: but at least one of those assumptions will be invalid, thus invalidating your calculation. If you want to minimize your chance of going broke, keep your maximum stake size at below 1% of your pot, for any individual trade: trading is a markov chain with an absorbing state at zero, and the trick is to avoid that state.
If you're in the top 1% of financial quants in the world, with access to the fastest sources of relevant information, can inject orders into the system as fast as any of the big players, and can access all their dark pools of liquidity, then your mechanical trading strategy may perform as well in the future as it did in the past, until others hit on how you're doing it, and copy it, diluting your returns. [edit:] In the OP's case, as a trader with all that access, then @Zach's suggestion of the R package Performance Analytics looks very promising. And I'd also suggest sub-sampling fixed-length intervals (e.g. 125-point) at random from the back-tested results, and looking at the statistics of the results: that is to say, pick a lot of dates at random from the range for which you have data: select uniformly on the range from the first date, to 124 days before the end date. For each interval, create a data point calculated as the net change in accumulated Profit & Loss between start and end dates. Then look at the mean, variance, skew, kurtosis of that set of data points. That might be useful: but only because the back-tested results were not used for calibrating the trading algorithm [end edit]
|
What methods to use for statistical prediction/forecast of trading data?
|
In general, in the markets, a mechanical trading system that has out-performed in the past, tends to under-perform in the future. If you're really lucky, then previous out-performance is just statisti
|
What methods to use for statistical prediction/forecast of trading data?
In general, in the markets, a mechanical trading system that has out-performed in the past, tends to under-perform in the future. If you're really lucky, then previous out-performance is just statistical noise, and your future performance is in line with the market.
In short, to your question "How can I calculate the change that over an X period of time, the strategy will, with a certain percentage of reliability) make money.", the answer is that you can't, and it probably won't: to calculate the probabilities, you need forward information about the markets, which you don't have; so you have to make a set of assumptions just to get the calculation to run: but at least one of those assumptions will be invalid, thus invalidating your calculation. If you want to minimize your chance of going broke, keep your maximum stake size at below 1% of your pot, for any individual trade: trading is a markov chain with an absorbing state at zero, and the trick is to avoid that state.
If you're in the top 1% of financial quants in the world, with access to the fastest sources of relevant information, can inject orders into the system as fast as any of the big players, and can access all their dark pools of liquidity, then your mechanical trading strategy may perform as well in the future as it did in the past, until others hit on how you're doing it, and copy it, diluting your returns. [edit:] In the OP's case, as a trader with all that access, then @Zach's suggestion of the R package Performance Analytics looks very promising. And I'd also suggest sub-sampling fixed-length intervals (e.g. 125-point) at random from the back-tested results, and looking at the statistics of the results: that is to say, pick a lot of dates at random from the range for which you have data: select uniformly on the range from the first date, to 124 days before the end date. For each interval, create a data point calculated as the net change in accumulated Profit & Loss between start and end dates. Then look at the mean, variance, skew, kurtosis of that set of data points. That might be useful: but only because the back-tested results were not used for calibrating the trading algorithm [end edit]
|
What methods to use for statistical prediction/forecast of trading data?
In general, in the markets, a mechanical trading system that has out-performed in the past, tends to under-perform in the future. If you're really lucky, then previous out-performance is just statisti
|
44,102
|
Is it possible to do data analysis in Open Office Calc?
|
Yes, you can do statistics in Open Office Calc:
Here is a list of statistical functions in LibreOffice Calc
Possibly out of date suggestions
There is an add on called R and Calc (page last modified in 2008; ymmv) that allows the user to call R functions from within Open Office.
Calc's data analysis tool is under development.
(page last modified in 2008)
However, the spreadsheet interface can get in the way of data analysis, and is often not the appropriate tool for the job.
If you are primarily interested in data analysis, it is worth checking out the Deducer or Rstudio interfaces to R.
Ggobi is a good tool for data visualization. (Update: also pre-2010)
|
Is it possible to do data analysis in Open Office Calc?
|
Yes, you can do statistics in Open Office Calc:
Here is a list of statistical functions in LibreOffice Calc
Possibly out of date suggestions
There is an add on called R and Calc (page last modified
|
Is it possible to do data analysis in Open Office Calc?
Yes, you can do statistics in Open Office Calc:
Here is a list of statistical functions in LibreOffice Calc
Possibly out of date suggestions
There is an add on called R and Calc (page last modified in 2008; ymmv) that allows the user to call R functions from within Open Office.
Calc's data analysis tool is under development.
(page last modified in 2008)
However, the spreadsheet interface can get in the way of data analysis, and is often not the appropriate tool for the job.
If you are primarily interested in data analysis, it is worth checking out the Deducer or Rstudio interfaces to R.
Ggobi is a good tool for data visualization. (Update: also pre-2010)
|
Is it possible to do data analysis in Open Office Calc?
Yes, you can do statistics in Open Office Calc:
Here is a list of statistical functions in LibreOffice Calc
Possibly out of date suggestions
There is an add on called R and Calc (page last modified
|
44,103
|
Is it possible to do data analysis in Open Office Calc?
|
Gnumeric
http://projects.gnome.org/gnumeric/
will do various statistical analyses. After installation they are found under Statistics in the top (File, Edit, etc.) menu.
|
Is it possible to do data analysis in Open Office Calc?
|
Gnumeric
http://projects.gnome.org/gnumeric/
will do various statistical analyses. After installation they are found under Statistics in the top (File, Edit, etc.) menu.
|
Is it possible to do data analysis in Open Office Calc?
Gnumeric
http://projects.gnome.org/gnumeric/
will do various statistical analyses. After installation they are found under Statistics in the top (File, Edit, etc.) menu.
|
Is it possible to do data analysis in Open Office Calc?
Gnumeric
http://projects.gnome.org/gnumeric/
will do various statistical analyses. After installation they are found under Statistics in the top (File, Edit, etc.) menu.
|
44,104
|
Is it possible to do data analysis in Open Office Calc?
|
First post here!
I've used this:
http://sourceforge.net/projects/ooomacros/files/OOo%20Statistics/
to do stats in openoffice (and recommended it to others as well).
I usually use R but sometimes a quick look is all you need.
best
i
|
Is it possible to do data analysis in Open Office Calc?
|
First post here!
I've used this:
http://sourceforge.net/projects/ooomacros/files/OOo%20Statistics/
to do stats in openoffice (and recommended it to others as well).
I usually use R but sometimes a qui
|
Is it possible to do data analysis in Open Office Calc?
First post here!
I've used this:
http://sourceforge.net/projects/ooomacros/files/OOo%20Statistics/
to do stats in openoffice (and recommended it to others as well).
I usually use R but sometimes a quick look is all you need.
best
i
|
Is it possible to do data analysis in Open Office Calc?
First post here!
I've used this:
http://sourceforge.net/projects/ooomacros/files/OOo%20Statistics/
to do stats in openoffice (and recommended it to others as well).
I usually use R but sometimes a qui
|
44,105
|
Is it possible to do data analysis in Open Office Calc?
|
Sofastats looks really well done, and it can import from OpenOffice files.
http://www.sofastatistics.com/
|
Is it possible to do data analysis in Open Office Calc?
|
Sofastats looks really well done, and it can import from OpenOffice files.
http://www.sofastatistics.com/
|
Is it possible to do data analysis in Open Office Calc?
Sofastats looks really well done, and it can import from OpenOffice files.
http://www.sofastatistics.com/
|
Is it possible to do data analysis in Open Office Calc?
Sofastats looks really well done, and it can import from OpenOffice files.
http://www.sofastatistics.com/
|
44,106
|
Factor dependent correlation
|
I agree with JMS advice, that the answer is totally context dependent.
But what you are looking at may also be considered a moderation effect.
In statistics, moderation occurs when
the relationship between two variables
depends on a third variable.
(quoted from wikipedia)
A moderation is statistically significant if in a multiple regression analyses the interaction of the predictor with the third variable is significant.
|
Factor dependent correlation
|
I agree with JMS advice, that the answer is totally context dependent.
But what you are looking at may also be considered a moderation effect.
In statistics, moderation occurs when
the relationshi
|
Factor dependent correlation
I agree with JMS advice, that the answer is totally context dependent.
But what you are looking at may also be considered a moderation effect.
In statistics, moderation occurs when
the relationship between two variables
depends on a third variable.
(quoted from wikipedia)
A moderation is statistically significant if in a multiple regression analyses the interaction of the predictor with the third variable is significant.
|
Factor dependent correlation
I agree with JMS advice, that the answer is totally context dependent.
But what you are looking at may also be considered a moderation effect.
In statistics, moderation occurs when
the relationshi
|
44,107
|
Factor dependent correlation
|
Are you familiar with Simpson's paradox? This would seem to be what you're observing here.
Edit: I didn't answer your question :) What exactly you should do is to some degree context dependent (Are the groups meaningful? Does this represent a problem in the study design? etc). At the very least you should report both results IMO.
|
Factor dependent correlation
|
Are you familiar with Simpson's paradox? This would seem to be what you're observing here.
Edit: I didn't answer your question :) What exactly you should do is to some degree context dependent (Are th
|
Factor dependent correlation
Are you familiar with Simpson's paradox? This would seem to be what you're observing here.
Edit: I didn't answer your question :) What exactly you should do is to some degree context dependent (Are the groups meaningful? Does this represent a problem in the study design? etc). At the very least you should report both results IMO.
|
Factor dependent correlation
Are you familiar with Simpson's paradox? This would seem to be what you're observing here.
Edit: I didn't answer your question :) What exactly you should do is to some degree context dependent (Are th
|
44,108
|
Factor dependent correlation
|
The previous comments are all good, but with group sample sizes of 5, 7, and 11, I wouldn't trust any of their correlations as far as I could throw them. You'll need to give the overall r a wide confidence interval as well. btw Nice job on the graph.
|
Factor dependent correlation
|
The previous comments are all good, but with group sample sizes of 5, 7, and 11, I wouldn't trust any of their correlations as far as I could throw them. You'll need to give the overall r a wide conf
|
Factor dependent correlation
The previous comments are all good, but with group sample sizes of 5, 7, and 11, I wouldn't trust any of their correlations as far as I could throw them. You'll need to give the overall r a wide confidence interval as well. btw Nice job on the graph.
|
Factor dependent correlation
The previous comments are all good, but with group sample sizes of 5, 7, and 11, I wouldn't trust any of their correlations as far as I could throw them. You'll need to give the overall r a wide conf
|
44,109
|
zscore function in R [closed]
|
As the zscore function you are looking for can be found in the R.basic package made by Henrik Bengtsson, which cannot be found on CRAN. To install use:
install.packages(c("R.basic"), contriburl="http://www.braju.com/R/repos/")
See this similar topic for more details.
|
zscore function in R [closed]
|
As the zscore function you are looking for can be found in the R.basic package made by Henrik Bengtsson, which cannot be found on CRAN. To install use:
install.packages(c("R.basic"), contriburl="http:
|
zscore function in R [closed]
As the zscore function you are looking for can be found in the R.basic package made by Henrik Bengtsson, which cannot be found on CRAN. To install use:
install.packages(c("R.basic"), contriburl="http://www.braju.com/R/repos/")
See this similar topic for more details.
|
zscore function in R [closed]
As the zscore function you are looking for can be found in the R.basic package made by Henrik Bengtsson, which cannot be found on CRAN. To install use:
install.packages(c("R.basic"), contriburl="http:
|
44,110
|
zscore function in R [closed]
|
Also, the base R function scale() can be used to produce z-scores. See help(scale)
|
zscore function in R [closed]
|
Also, the base R function scale() can be used to produce z-scores. See help(scale)
|
zscore function in R [closed]
Also, the base R function scale() can be used to produce z-scores. See help(scale)
|
zscore function in R [closed]
Also, the base R function scale() can be used to produce z-scores. See help(scale)
|
44,111
|
zscore function in R [closed]
|
You can use the following function to calculate the z value:
zVal <- round(qnorm(1 - (1 - prob)/2), 2)
Example, for 90%:
> zVal <- round(qnorm(1 - (1 - 0.90)/2), 2)
> 1.64
|
zscore function in R [closed]
|
You can use the following function to calculate the z value:
zVal <- round(qnorm(1 - (1 - prob)/2), 2)
Example, for 90%:
> zVal <- round(qnorm(1 - (1 - 0.90)/2), 2)
> 1.64
|
zscore function in R [closed]
You can use the following function to calculate the z value:
zVal <- round(qnorm(1 - (1 - prob)/2), 2)
Example, for 90%:
> zVal <- round(qnorm(1 - (1 - 0.90)/2), 2)
> 1.64
|
zscore function in R [closed]
You can use the following function to calculate the z value:
zVal <- round(qnorm(1 - (1 - prob)/2), 2)
Example, for 90%:
> zVal <- round(qnorm(1 - (1 - 0.90)/2), 2)
> 1.64
|
44,112
|
R resources in non-English languages
|
In german:
A short introduction to R very short, covers only the basics of R programming
http://de.wikibooks.org/wiki/GNU_R teaches the basics of R programmming in detail and also contains some examples of producing graphics and statistics.
cran.r-project.org/doc/contrib/Sawitzki-Einfuehrung.pdf a lengthy introduction into statistics with R with a smaller focus on programming.
|
R resources in non-English languages
|
In german:
A short introduction to R very short, covers only the basics of R programming
http://de.wikibooks.org/wiki/GNU_R teaches the basics of R programmming in detail and also contains some exam
|
R resources in non-English languages
In german:
A short introduction to R very short, covers only the basics of R programming
http://de.wikibooks.org/wiki/GNU_R teaches the basics of R programmming in detail and also contains some examples of producing graphics and statistics.
cran.r-project.org/doc/contrib/Sawitzki-Einfuehrung.pdf a lengthy introduction into statistics with R with a smaller focus on programming.
|
R resources in non-English languages
In german:
A short introduction to R very short, covers only the basics of R programming
http://de.wikibooks.org/wiki/GNU_R teaches the basics of R programmming in detail and also contains some exam
|
44,113
|
R resources in non-English languages
|
There doesn't appear to be much in Russian, but here is a couple of links:
http://herba.msu.ru/shipunov/software/r/r-ru.htm contains pointers to a number of Russian-language R resources;
http://voliadis.ru/taxonomy/term/18 is a blog with some R content.
|
R resources in non-English languages
|
There doesn't appear to be much in Russian, but here is a couple of links:
http://herba.msu.ru/shipunov/software/r/r-ru.htm contains pointers to a number of Russian-language R resources;
http://volia
|
R resources in non-English languages
There doesn't appear to be much in Russian, but here is a couple of links:
http://herba.msu.ru/shipunov/software/r/r-ru.htm contains pointers to a number of Russian-language R resources;
http://voliadis.ru/taxonomy/term/18 is a blog with some R content.
|
R resources in non-English languages
There doesn't appear to be much in Russian, but here is a couple of links:
http://herba.msu.ru/shipunov/software/r/r-ru.htm contains pointers to a number of Russian-language R resources;
http://volia
|
44,114
|
R resources in non-English languages
|
Some german blog entries:
http://www.schockwellenreiter.de/blog/tag/r/
and
http://markheckmann.wordpress.com/category/r-r-code/
edit: and one more:
http://wagezudenken.blogspot.com/
|
R resources in non-English languages
|
Some german blog entries:
http://www.schockwellenreiter.de/blog/tag/r/
and
http://markheckmann.wordpress.com/category/r-r-code/
edit: and one more:
http://wagezudenken.blogspot.com/
|
R resources in non-English languages
Some german blog entries:
http://www.schockwellenreiter.de/blog/tag/r/
and
http://markheckmann.wordpress.com/category/r-r-code/
edit: and one more:
http://wagezudenken.blogspot.com/
|
R resources in non-English languages
Some german blog entries:
http://www.schockwellenreiter.de/blog/tag/r/
and
http://markheckmann.wordpress.com/category/r-r-code/
edit: and one more:
http://wagezudenken.blogspot.com/
|
44,115
|
R resources in non-English languages
|
All RSS feeds I follow are in English actually, so I'll just point to tutorials available in French, or made by French researchers.
Apart from the Contributed Documentation on CRAN, I often browse the R website hosted at the bioinformatics lab in Lyon (France); it is mostly in French, but it also includes english material. I also like Philippe Besse resources (SAS + R).
|
R resources in non-English languages
|
All RSS feeds I follow are in English actually, so I'll just point to tutorials available in French, or made by French researchers.
Apart from the Contributed Documentation on CRAN, I often browse the
|
R resources in non-English languages
All RSS feeds I follow are in English actually, so I'll just point to tutorials available in French, or made by French researchers.
Apart from the Contributed Documentation on CRAN, I often browse the R website hosted at the bioinformatics lab in Lyon (France); it is mostly in French, but it also includes english material. I also like Philippe Besse resources (SAS + R).
|
R resources in non-English languages
All RSS feeds I follow are in English actually, so I'll just point to tutorials available in French, or made by French researchers.
Apart from the Contributed Documentation on CRAN, I often browse the
|
44,116
|
R resources in non-English languages
|
Here is a german blog with some posts on R:
http://blog.berndweiss.net/tag/r/
Recently started, with no posts on R yet, but focused on open data, is this blog:
http://blog.zeit.de/open-data
|
R resources in non-English languages
|
Here is a german blog with some posts on R:
http://blog.berndweiss.net/tag/r/
Recently started, with no posts on R yet, but focused on open data, is this blog:
http://blog.zeit.de/open-data
|
R resources in non-English languages
Here is a german blog with some posts on R:
http://blog.berndweiss.net/tag/r/
Recently started, with no posts on R yet, but focused on open data, is this blog:
http://blog.zeit.de/open-data
|
R resources in non-English languages
Here is a german blog with some posts on R:
http://blog.berndweiss.net/tag/r/
Recently started, with no posts on R yet, but focused on open data, is this blog:
http://blog.zeit.de/open-data
|
44,117
|
R resources in non-English languages
|
See the bottom 2 thirds of http://cran.fhcrc.org/other-docs.html (or other cran site).
|
R resources in non-English languages
|
See the bottom 2 thirds of http://cran.fhcrc.org/other-docs.html (or other cran site).
|
R resources in non-English languages
See the bottom 2 thirds of http://cran.fhcrc.org/other-docs.html (or other cran site).
|
R resources in non-English languages
See the bottom 2 thirds of http://cran.fhcrc.org/other-docs.html (or other cran site).
|
44,118
|
Why a sample of skewed normal distribution is not normal?
|
I was under the impression that if I randomly sample from a skewed normal distribution, the distribution of my sample would be normal based on central limit theorem
You are incorrect in your understanding of the central limit theorem (it is a pretty common misconception, as Dave pointed out). The CLT states that under certain conditions the limiting distribution of the sample mean is normal, not that data sampled from a non-normal population will have a normal distribution.
You can see this in action if you run a different simulation, where you simulate the sample means:
import random
import numpy as np
from scipy.stats import skewnorm, norm
import seaborn as sns
import matplotlib.pyplot as plt
skewed = skewnorm(4)
simulated_means = []
for i in range(10000):
data = skewed.rvs(100)
simulated_means.append(np.mean(data))
sns.distplot(simulated_means, fit=norm)
plt.show()
In this particular case, we see that the sample distribution of the mean is more or less normal when n=100; the normal fit is the black line. This will not always be true, since the CLT is an asymptotic result, but simulations like this help us understand how the sampling distribution from a particular population with a particular sample size might look.
|
Why a sample of skewed normal distribution is not normal?
|
I was under the impression that if I randomly sample from a skewed normal distribution, the distribution of my sample would be normal based on central limit theorem
You are incorrect in your understa
|
Why a sample of skewed normal distribution is not normal?
I was under the impression that if I randomly sample from a skewed normal distribution, the distribution of my sample would be normal based on central limit theorem
You are incorrect in your understanding of the central limit theorem (it is a pretty common misconception, as Dave pointed out). The CLT states that under certain conditions the limiting distribution of the sample mean is normal, not that data sampled from a non-normal population will have a normal distribution.
You can see this in action if you run a different simulation, where you simulate the sample means:
import random
import numpy as np
from scipy.stats import skewnorm, norm
import seaborn as sns
import matplotlib.pyplot as plt
skewed = skewnorm(4)
simulated_means = []
for i in range(10000):
data = skewed.rvs(100)
simulated_means.append(np.mean(data))
sns.distplot(simulated_means, fit=norm)
plt.show()
In this particular case, we see that the sample distribution of the mean is more or less normal when n=100; the normal fit is the black line. This will not always be true, since the CLT is an asymptotic result, but simulations like this help us understand how the sampling distribution from a particular population with a particular sample size might look.
|
Why a sample of skewed normal distribution is not normal?
I was under the impression that if I randomly sample from a skewed normal distribution, the distribution of my sample would be normal based on central limit theorem
You are incorrect in your understa
|
44,119
|
Why a sample of skewed normal distribution is not normal?
|
Consider this:
If you take as a sample the whole population (i.e. the very very large “sample”), then by some miracle your skewed population suddenly will be changed to a normal one?
|
Why a sample of skewed normal distribution is not normal?
|
Consider this:
If you take as a sample the whole population (i.e. the very very large “sample”), then by some miracle your skewed population suddenly will be changed to a normal one?
|
Why a sample of skewed normal distribution is not normal?
Consider this:
If you take as a sample the whole population (i.e. the very very large “sample”), then by some miracle your skewed population suddenly will be changed to a normal one?
|
Why a sample of skewed normal distribution is not normal?
Consider this:
If you take as a sample the whole population (i.e. the very very large “sample”), then by some miracle your skewed population suddenly will be changed to a normal one?
|
44,120
|
Deep Learning based time series forecasting
|
You can't meaningfully talk about DNNs or ARIMA being "better at time series forecasting". It depends enormously on what kind of series you are looking at: short vs. long series, many vs. few or only one related series, causal drivers or not etc.
Anyone who makes sweeping statements here is like a salesman who knows exactly what kind of car you need - without bothering to find out whether you need to drive offroad, commute two miles to work, need to move a Little League baseball team, or want to transport cattle.
As a very rough rule of thumb, classical methods perform competitively if you have few short series. DNNs may work better if you have many related series. (It depends heavily on whether the person setting them up knows what she or he is doing.)
|
Deep Learning based time series forecasting
|
You can't meaningfully talk about DNNs or ARIMA being "better at time series forecasting". It depends enormously on what kind of series you are looking at: short vs. long series, many vs. few or only
|
Deep Learning based time series forecasting
You can't meaningfully talk about DNNs or ARIMA being "better at time series forecasting". It depends enormously on what kind of series you are looking at: short vs. long series, many vs. few or only one related series, causal drivers or not etc.
Anyone who makes sweeping statements here is like a salesman who knows exactly what kind of car you need - without bothering to find out whether you need to drive offroad, commute two miles to work, need to move a Little League baseball team, or want to transport cattle.
As a very rough rule of thumb, classical methods perform competitively if you have few short series. DNNs may work better if you have many related series. (It depends heavily on whether the person setting them up knows what she or he is doing.)
|
Deep Learning based time series forecasting
You can't meaningfully talk about DNNs or ARIMA being "better at time series forecasting". It depends enormously on what kind of series you are looking at: short vs. long series, many vs. few or only
|
44,121
|
Deep Learning based time series forecasting
|
You might also consider the drivers behind 'Deep-learning-beats-all' trend you mention. Much of the hype around these techniques comes from the superiority of these methods in image recognition and natural language problems. These domains are defined by exceptionally large datasets (e.g. ImageNet > 14 million images, it's possible to find very large text corpora). So just by understanding why these methods are popular in the first place more or less answers why they are less used for time series (since time series datasets are much smaller).
As an example of how short important time series datasets can be very small consider that if you wanted to model US GDP, the Federal Reserve has quarterly data going back to 1929, which is only about 360 datapoints!
|
Deep Learning based time series forecasting
|
You might also consider the drivers behind 'Deep-learning-beats-all' trend you mention. Much of the hype around these techniques comes from the superiority of these methods in image recognition and na
|
Deep Learning based time series forecasting
You might also consider the drivers behind 'Deep-learning-beats-all' trend you mention. Much of the hype around these techniques comes from the superiority of these methods in image recognition and natural language problems. These domains are defined by exceptionally large datasets (e.g. ImageNet > 14 million images, it's possible to find very large text corpora). So just by understanding why these methods are popular in the first place more or less answers why they are less used for time series (since time series datasets are much smaller).
As an example of how short important time series datasets can be very small consider that if you wanted to model US GDP, the Federal Reserve has quarterly data going back to 1929, which is only about 360 datapoints!
|
Deep Learning based time series forecasting
You might also consider the drivers behind 'Deep-learning-beats-all' trend you mention. Much of the hype around these techniques comes from the superiority of these methods in image recognition and na
|
44,122
|
Deep Learning based time series forecasting
|
Is the reason behind the result from the fact that DNN algorithms require a large-sized data?
There are parallels to time-series and tabular data. A recent work Tabular Data: Deep Learning is Not All You Need shows similar trends that DNN does not out perform conventional models on tabular data. However, it is right that the learning capacity of DNNs are advantageous when there are very large datasets. More justice to deep learning would be to say this is an open research area and DNNs have great potential over conventional time-series models.
PS: To give link to paper Statistical and Machine Learning forecasting methods: Concerns and ways forward lead by Cypriot researcher
|
Deep Learning based time series forecasting
|
Is the reason behind the result from the fact that DNN algorithms require a large-sized data?
There are parallels to time-series and tabular data. A recent work Tabular Data: Deep Learning is Not All
|
Deep Learning based time series forecasting
Is the reason behind the result from the fact that DNN algorithms require a large-sized data?
There are parallels to time-series and tabular data. A recent work Tabular Data: Deep Learning is Not All You Need shows similar trends that DNN does not out perform conventional models on tabular data. However, it is right that the learning capacity of DNNs are advantageous when there are very large datasets. More justice to deep learning would be to say this is an open research area and DNNs have great potential over conventional time-series models.
PS: To give link to paper Statistical and Machine Learning forecasting methods: Concerns and ways forward lead by Cypriot researcher
|
Deep Learning based time series forecasting
Is the reason behind the result from the fact that DNN algorithms require a large-sized data?
There are parallels to time-series and tabular data. A recent work Tabular Data: Deep Learning is Not All
|
44,123
|
Deep Learning based time series forecasting
|
Statitical tools applied to time series forecasting are very developed and approach-oriented methods. You find many technics arima sarima sarimax var varimax vecm.... And each method had been developed for a particular situation and type of data and serie.
In the other hand DNN such as RNN, LSTM.. Are challenging models that have not been very used in this field, so haven't experienced many situation so they can been evaluated and updated in a large scale.
I had the chance to work for a system of medels, combined, for a platform that forecast time series of eco, demo, social... Indicators and it happen that sometimes LSTM get a better result.
What I can claim is that DNN are not good when it comes to random trend chocs.
|
Deep Learning based time series forecasting
|
Statitical tools applied to time series forecasting are very developed and approach-oriented methods. You find many technics arima sarima sarimax var varimax vecm.... And each method had been develope
|
Deep Learning based time series forecasting
Statitical tools applied to time series forecasting are very developed and approach-oriented methods. You find many technics arima sarima sarimax var varimax vecm.... And each method had been developed for a particular situation and type of data and serie.
In the other hand DNN such as RNN, LSTM.. Are challenging models that have not been very used in this field, so haven't experienced many situation so they can been evaluated and updated in a large scale.
I had the chance to work for a system of medels, combined, for a platform that forecast time series of eco, demo, social... Indicators and it happen that sometimes LSTM get a better result.
What I can claim is that DNN are not good when it comes to random trend chocs.
|
Deep Learning based time series forecasting
Statitical tools applied to time series forecasting are very developed and approach-oriented methods. You find many technics arima sarima sarimax var varimax vecm.... And each method had been develope
|
44,124
|
theoretical basis for logistic regression
|
There is no theoretical basis for logistic regression (in general as a choice vs. another model). Two things are arbitrary:
summing the influences of each variables, each influence being proportional to the variable (linear predictor)
the sigmoid link (logit)
The first assumption is similar to linear regression: a simple model that is very useful and often matches observations sufficiently well to make something of it.
The second assumption can't be justified either. It is similar to the assumption of normality of the noise in linear regression. Interestingly many other link functions produce very similar results: Difference between logit and probit models.
It is however interesting that logistic regression is equivalent to maximum entropy (in the case of binary/multinomial outcomes and independent observations), and that maximum entropy was stated as a principle by Jaynes in the 50s. I think people realized the two are equivalent much later (early 2000s as far as I known).
|
theoretical basis for logistic regression
|
There is no theoretical basis for logistic regression (in general as a choice vs. another model). Two things are arbitrary:
summing the influences of each variables, each influence being proportiona
|
theoretical basis for logistic regression
There is no theoretical basis for logistic regression (in general as a choice vs. another model). Two things are arbitrary:
summing the influences of each variables, each influence being proportional to the variable (linear predictor)
the sigmoid link (logit)
The first assumption is similar to linear regression: a simple model that is very useful and often matches observations sufficiently well to make something of it.
The second assumption can't be justified either. It is similar to the assumption of normality of the noise in linear regression. Interestingly many other link functions produce very similar results: Difference between logit and probit models.
It is however interesting that logistic regression is equivalent to maximum entropy (in the case of binary/multinomial outcomes and independent observations), and that maximum entropy was stated as a principle by Jaynes in the 50s. I think people realized the two are equivalent much later (early 2000s as far as I known).
|
theoretical basis for logistic regression
There is no theoretical basis for logistic regression (in general as a choice vs. another model). Two things are arbitrary:
summing the influences of each variables, each influence being proportiona
|
44,125
|
theoretical basis for logistic regression
|
No, it isn't necessary. Economists like to get elbow-deep in mathematical hypotheses of how people make decisions, hence their frequent invocation of utility theory. But logistic regression can be justified in statistical terms without reference to utility, using the idea that a unit change in a predictor relates to an additive change in the log odds of a response.
|
theoretical basis for logistic regression
|
No, it isn't necessary. Economists like to get elbow-deep in mathematical hypotheses of how people make decisions, hence their frequent invocation of utility theory. But logistic regression can be jus
|
theoretical basis for logistic regression
No, it isn't necessary. Economists like to get elbow-deep in mathematical hypotheses of how people make decisions, hence their frequent invocation of utility theory. But logistic regression can be justified in statistical terms without reference to utility, using the idea that a unit change in a predictor relates to an additive change in the log odds of a response.
|
theoretical basis for logistic regression
No, it isn't necessary. Economists like to get elbow-deep in mathematical hypotheses of how people make decisions, hence their frequent invocation of utility theory. But logistic regression can be jus
|
44,126
|
theoretical basis for logistic regression
|
You get an impression that economists do it because that's what they are forced to write in order to get published in microeconomics. Pure empirical studies are hard to publish.
However, this is changing, and not all economists do this. For instance, take a look at this work: "Analyzing the Risk of Mortgage Default." They use multinomial logit, and there's no utility function mentioned anywhere in the paper. And this is not even macroeconomics, where they don't feel pressured to shove the utility function in every paper
|
theoretical basis for logistic regression
|
You get an impression that economists do it because that's what they are forced to write in order to get published in microeconomics. Pure empirical studies are hard to publish.
However, this is chang
|
theoretical basis for logistic regression
You get an impression that economists do it because that's what they are forced to write in order to get published in microeconomics. Pure empirical studies are hard to publish.
However, this is changing, and not all economists do this. For instance, take a look at this work: "Analyzing the Risk of Mortgage Default." They use multinomial logit, and there's no utility function mentioned anywhere in the paper. And this is not even macroeconomics, where they don't feel pressured to shove the utility function in every paper
|
theoretical basis for logistic regression
You get an impression that economists do it because that's what they are forced to write in order to get published in microeconomics. Pure empirical studies are hard to publish.
However, this is chang
|
44,127
|
theoretical basis for logistic regression
|
Other people have answered your question, let me explain a bit more the philosophy behind the different justfications for logit models.
The utility model used in economics is based on the grand idea to link general preference orderings over outcomes with the ordering of real numbers.
Less abstractly, what economists have tried to do is to show when any preference over some possible outcomes can be represented by functions that give a "maximum choice", and when this is not possible.
This fits very natural with logistic regression when there are only two choices, 0 and 1, and also very well with multinomial models where there are more choices.
Given distributional assumptions, the logistic regression therefore "arises" naturally out of a microfounded and very general model of human behavior. This is nice for economists, because many results they are after abstractly require the existence of such preferences to make sense more than just heuristically. The same is true for other social sciences that rely on choice, but their focus is often different.
One can posit a discrete choice model either as a utility model, or a latent variable model. The latent variable model (where $y=1$ if some $y*>t$) is also basically a choice model, only that it does not specify why this decision rule comes about.
Sometimes we are not interested in modeling this why. For example, we may simply not care, because some otherwise stable but complicated mechanism is behind it. It may also be the case that there is no actual entity making a decision, it is in a sense a purely statistical affair.
It would then be rather contrived to think about some hypothetic preference orderings by some non-entity.
So to answer your question: The utility model is not necessary at all. It depends on your research question. Is there an entity making a decision? If so, are you trying to learn something about this decisionmaking? If yes, then all approaches will sooner or later lead to a utility model, simply because you need to find stable or logical "preferences" in your research.
In other applications, utility is not necessary at all (especially outside of social sciences this may be the case, say a mechanical model) and it would be unnecessary and even harmful to argue with the utility model.
|
theoretical basis for logistic regression
|
Other people have answered your question, let me explain a bit more the philosophy behind the different justfications for logit models.
The utility model used in economics is based on the grand idea t
|
theoretical basis for logistic regression
Other people have answered your question, let me explain a bit more the philosophy behind the different justfications for logit models.
The utility model used in economics is based on the grand idea to link general preference orderings over outcomes with the ordering of real numbers.
Less abstractly, what economists have tried to do is to show when any preference over some possible outcomes can be represented by functions that give a "maximum choice", and when this is not possible.
This fits very natural with logistic regression when there are only two choices, 0 and 1, and also very well with multinomial models where there are more choices.
Given distributional assumptions, the logistic regression therefore "arises" naturally out of a microfounded and very general model of human behavior. This is nice for economists, because many results they are after abstractly require the existence of such preferences to make sense more than just heuristically. The same is true for other social sciences that rely on choice, but their focus is often different.
One can posit a discrete choice model either as a utility model, or a latent variable model. The latent variable model (where $y=1$ if some $y*>t$) is also basically a choice model, only that it does not specify why this decision rule comes about.
Sometimes we are not interested in modeling this why. For example, we may simply not care, because some otherwise stable but complicated mechanism is behind it. It may also be the case that there is no actual entity making a decision, it is in a sense a purely statistical affair.
It would then be rather contrived to think about some hypothetic preference orderings by some non-entity.
So to answer your question: The utility model is not necessary at all. It depends on your research question. Is there an entity making a decision? If so, are you trying to learn something about this decisionmaking? If yes, then all approaches will sooner or later lead to a utility model, simply because you need to find stable or logical "preferences" in your research.
In other applications, utility is not necessary at all (especially outside of social sciences this may be the case, say a mechanical model) and it would be unnecessary and even harmful to argue with the utility model.
|
theoretical basis for logistic regression
Other people have answered your question, let me explain a bit more the philosophy behind the different justfications for logit models.
The utility model used in economics is based on the grand idea t
|
44,128
|
theoretical basis for logistic regression
|
I'm not an economist nor do I know much utility theory, but I actually think there is some theoretical justification for logistic regression - at least at a high level. In real life, aren't decisions closer to existing on a scale like 0-100% rather than 0/1 binary? The ability to get 'probabilities' out of a logistic model makes it more attractive than some other classification methods. Of course, the fact that linearly separability frustrates logistic regression poses a problem for certain classes of theoretical justification.
|
theoretical basis for logistic regression
|
I'm not an economist nor do I know much utility theory, but I actually think there is some theoretical justification for logistic regression - at least at a high level. In real life, aren't decisions
|
theoretical basis for logistic regression
I'm not an economist nor do I know much utility theory, but I actually think there is some theoretical justification for logistic regression - at least at a high level. In real life, aren't decisions closer to existing on a scale like 0-100% rather than 0/1 binary? The ability to get 'probabilities' out of a logistic model makes it more attractive than some other classification methods. Of course, the fact that linearly separability frustrates logistic regression poses a problem for certain classes of theoretical justification.
|
theoretical basis for logistic regression
I'm not an economist nor do I know much utility theory, but I actually think there is some theoretical justification for logistic regression - at least at a high level. In real life, aren't decisions
|
44,129
|
Under what additional conditions does independence follow from zero correlation?
|
The statement that you are asking about has two parts:
If $X$ and $Y$ are independent, then $X$ and $Y$ are uncorrelated.
If $X$ and $Y$ are uncorrelated, then $X$ and $Y$ are independent.
Statement 1 is always true and imposes no additional constraints on $X$ and $Y$ other than what already has been assumed, viz. that they are independent random variables. Statement 2 does not hold in general, but it does hold if we constrain $X$ and $Y$ to be jointly Gaussian random variables. That is,
2'. If jointly Gaussian random variables $X$ and $Y$ are uncorrelated, then $X$ and $Y$ are independent.
is a true statement, and so
Jointly Gaussian random variables $X$ and $Y$ are uncorrelated if and only if
they are independent
is a true statement but
"Random variables $X$ and $Y$ are uncorrelated if and only if they are independent"
does not hold in general. Nor is
"Gaussian random variables $X$ and $Y$ are uncorrelated if and only if they are independent"
a true statement. (Note that in contrast to 2'. the word jointly is missing from the statement). For example, suppose that $X\sim N(0,1)$ and $Z$, independent of $X$ is a Bernoulli random variable with parameter $\frac 12$. Set $Y = (-1)^ZX = \pm X$ and note that $Y \sim N(0,1)$, just like $X$. But,
$$E[XY] = E[(-1)^Z X^2] = E[(-1)^Z]E[X^2] = 0 = E[X]E[Y]$$
showing that $X$ and $Y$ are (marginally) Gaussian random variables that
are uncorrelated. That they are not independent is easily see because conditioned on the event that $X = x_0$, $Y$ takes on values $x_0$ and $-x_0$ and is thus a discrete random variable instead of continuing to enjoy the standard Gaussian density as it would have if only $X$ and $Y$ were independent random variables. Note that $X$ and $Y$ do not have a jointly Gaussian density.
Finally, if $X$ and $Y$ are Bernoulli random variables or more generally, discrete random variables that take on only two different values, then the statement
Bernoulli random variables (more generally, dichotomous random variables) $X$ and $Y$ are uncorrelated if and only if they are independent
is a true statement. See this question and its answers for some details.
|
Under what additional conditions does independence follow from zero correlation?
|
The statement that you are asking about has two parts:
If $X$ and $Y$ are independent, then $X$ and $Y$ are uncorrelated.
If $X$ and $Y$ are uncorrelated, then $X$ and $Y$ are independent.
Statement
|
Under what additional conditions does independence follow from zero correlation?
The statement that you are asking about has two parts:
If $X$ and $Y$ are independent, then $X$ and $Y$ are uncorrelated.
If $X$ and $Y$ are uncorrelated, then $X$ and $Y$ are independent.
Statement 1 is always true and imposes no additional constraints on $X$ and $Y$ other than what already has been assumed, viz. that they are independent random variables. Statement 2 does not hold in general, but it does hold if we constrain $X$ and $Y$ to be jointly Gaussian random variables. That is,
2'. If jointly Gaussian random variables $X$ and $Y$ are uncorrelated, then $X$ and $Y$ are independent.
is a true statement, and so
Jointly Gaussian random variables $X$ and $Y$ are uncorrelated if and only if
they are independent
is a true statement but
"Random variables $X$ and $Y$ are uncorrelated if and only if they are independent"
does not hold in general. Nor is
"Gaussian random variables $X$ and $Y$ are uncorrelated if and only if they are independent"
a true statement. (Note that in contrast to 2'. the word jointly is missing from the statement). For example, suppose that $X\sim N(0,1)$ and $Z$, independent of $X$ is a Bernoulli random variable with parameter $\frac 12$. Set $Y = (-1)^ZX = \pm X$ and note that $Y \sim N(0,1)$, just like $X$. But,
$$E[XY] = E[(-1)^Z X^2] = E[(-1)^Z]E[X^2] = 0 = E[X]E[Y]$$
showing that $X$ and $Y$ are (marginally) Gaussian random variables that
are uncorrelated. That they are not independent is easily see because conditioned on the event that $X = x_0$, $Y$ takes on values $x_0$ and $-x_0$ and is thus a discrete random variable instead of continuing to enjoy the standard Gaussian density as it would have if only $X$ and $Y$ were independent random variables. Note that $X$ and $Y$ do not have a jointly Gaussian density.
Finally, if $X$ and $Y$ are Bernoulli random variables or more generally, discrete random variables that take on only two different values, then the statement
Bernoulli random variables (more generally, dichotomous random variables) $X$ and $Y$ are uncorrelated if and only if they are independent
is a true statement. See this question and its answers for some details.
|
Under what additional conditions does independence follow from zero correlation?
The statement that you are asking about has two parts:
If $X$ and $Y$ are independent, then $X$ and $Y$ are uncorrelated.
If $X$ and $Y$ are uncorrelated, then $X$ and $Y$ are independent.
Statement
|
44,130
|
Under what additional conditions does independence follow from zero correlation?
|
For a joint distribution function (CDF) constructed as follows
$$H_{X,Y}(x,y)=F_X(x)G_Y(y)\left[1+\alpha\big(1-F_X(x)\big)\big(1-G_Y(y)\big)\right],\;\;\; \alpha >1$$
where $F_X(x)$ and $G_Y(y)$ are any two marginal CDF's,
uncorrelatedness (zero covariance) is equivalent to independence.
This is the "Farlie-Gumbel-Morgenstern" family of joint distributions. For an analysis of the correlation structure, see
Schucany, W. R., Parr, W. C., & Boyer, J. E. (1978). Correlation structure in farlie-gumbel-morgenstern distributions. Biometrika, 65(3), 650-653.
|
Under what additional conditions does independence follow from zero correlation?
|
For a joint distribution function (CDF) constructed as follows
$$H_{X,Y}(x,y)=F_X(x)G_Y(y)\left[1+\alpha\big(1-F_X(x)\big)\big(1-G_Y(y)\big)\right],\;\;\; \alpha >1$$
where $F_X(x)$ and $G_Y(y)$ are a
|
Under what additional conditions does independence follow from zero correlation?
For a joint distribution function (CDF) constructed as follows
$$H_{X,Y}(x,y)=F_X(x)G_Y(y)\left[1+\alpha\big(1-F_X(x)\big)\big(1-G_Y(y)\big)\right],\;\;\; \alpha >1$$
where $F_X(x)$ and $G_Y(y)$ are any two marginal CDF's,
uncorrelatedness (zero covariance) is equivalent to independence.
This is the "Farlie-Gumbel-Morgenstern" family of joint distributions. For an analysis of the correlation structure, see
Schucany, W. R., Parr, W. C., & Boyer, J. E. (1978). Correlation structure in farlie-gumbel-morgenstern distributions. Biometrika, 65(3), 650-653.
|
Under what additional conditions does independence follow from zero correlation?
For a joint distribution function (CDF) constructed as follows
$$H_{X,Y}(x,y)=F_X(x)G_Y(y)\left[1+\alpha\big(1-F_X(x)\big)\big(1-G_Y(y)\big)\right],\;\;\; \alpha >1$$
where $F_X(x)$ and $G_Y(y)$ are a
|
44,131
|
Under what additional conditions does independence follow from zero correlation?
|
The result is only guaranteed to hold when X and Y form a bivariate normal distribution. You will find this in most multivariate analysis texts as well as on some threads on this site.
|
Under what additional conditions does independence follow from zero correlation?
|
The result is only guaranteed to hold when X and Y form a bivariate normal distribution. You will find this in most multivariate analysis texts as well as on some threads on this site.
|
Under what additional conditions does independence follow from zero correlation?
The result is only guaranteed to hold when X and Y form a bivariate normal distribution. You will find this in most multivariate analysis texts as well as on some threads on this site.
|
Under what additional conditions does independence follow from zero correlation?
The result is only guaranteed to hold when X and Y form a bivariate normal distribution. You will find this in most multivariate analysis texts as well as on some threads on this site.
|
44,132
|
Proving that $x^TAx = tr(xx^TA)$? [closed]
|
A well-known property of traces (see Matrix Cookbook, 1.1 (16)) is that for any $A, B, C$, $\mbox{tr}(ABC) = \mbox{tr}(BCA)$.
Applying this to your case gives $\mbox{tr}(x x^T A) = \mbox{tr}(x^T A x)$. Note that the expression in the trace of the right hand side is a scalar. The trace of a scalar is the scalar itself.
|
Proving that $x^TAx = tr(xx^TA)$? [closed]
|
A well-known property of traces (see Matrix Cookbook, 1.1 (16)) is that for any $A, B, C$, $\mbox{tr}(ABC) = \mbox{tr}(BCA)$.
Applying this to your case gives $\mbox{tr}(x x^T A) = \mbox{tr}(x^T A x)$
|
Proving that $x^TAx = tr(xx^TA)$? [closed]
A well-known property of traces (see Matrix Cookbook, 1.1 (16)) is that for any $A, B, C$, $\mbox{tr}(ABC) = \mbox{tr}(BCA)$.
Applying this to your case gives $\mbox{tr}(x x^T A) = \mbox{tr}(x^T A x)$. Note that the expression in the trace of the right hand side is a scalar. The trace of a scalar is the scalar itself.
|
Proving that $x^TAx = tr(xx^TA)$? [closed]
A well-known property of traces (see Matrix Cookbook, 1.1 (16)) is that for any $A, B, C$, $\mbox{tr}(ABC) = \mbox{tr}(BCA)$.
Applying this to your case gives $\mbox{tr}(x x^T A) = \mbox{tr}(x^T A x)$
|
44,133
|
Proving that $x^TAx = tr(xx^TA)$? [closed]
|
Some guidance in the form of an outline of the steps
Note that $x^TAx$ is a scalar.
Use what you know about the trace and scalars to convert it to a trace.
Use properties of the trace to convert it to what you need.
|
Proving that $x^TAx = tr(xx^TA)$? [closed]
|
Some guidance in the form of an outline of the steps
Note that $x^TAx$ is a scalar.
Use what you know about the trace and scalars to convert it to a trace.
Use properties of the trace to convert it t
|
Proving that $x^TAx = tr(xx^TA)$? [closed]
Some guidance in the form of an outline of the steps
Note that $x^TAx$ is a scalar.
Use what you know about the trace and scalars to convert it to a trace.
Use properties of the trace to convert it to what you need.
|
Proving that $x^TAx = tr(xx^TA)$? [closed]
Some guidance in the form of an outline of the steps
Note that $x^TAx$ is a scalar.
Use what you know about the trace and scalars to convert it to a trace.
Use properties of the trace to convert it t
|
44,134
|
Proving that $x^TAx = tr(xx^TA)$? [closed]
|
Given $\mathrm a, \mathrm b \in \mathbb R^n$,
$$\mbox{tr} ( \, \mathrm a \mathrm b^\top ) = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \mathrm a^\top \mathrm b$$
Thus,
$$\mbox{tr} (\mathrm x \mathrm x^\top \mathrm A) = \mbox{tr} (\mathrm x (\mathrm A^\top \mathrm x)^\top ) = \mathrm x^\top \mathrm A^\top \mathrm x = \mathrm x^\top \mathrm A \, \mathrm x$$
|
Proving that $x^TAx = tr(xx^TA)$? [closed]
|
Given $\mathrm a, \mathrm b \in \mathbb R^n$,
$$\mbox{tr} ( \, \mathrm a \mathrm b^\top ) = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \mathrm a^\top \mathrm b$$
Thus,
$$\mbox{tr} (\mathrm x \mathrm x^\to
|
Proving that $x^TAx = tr(xx^TA)$? [closed]
Given $\mathrm a, \mathrm b \in \mathbb R^n$,
$$\mbox{tr} ( \, \mathrm a \mathrm b^\top ) = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \mathrm a^\top \mathrm b$$
Thus,
$$\mbox{tr} (\mathrm x \mathrm x^\top \mathrm A) = \mbox{tr} (\mathrm x (\mathrm A^\top \mathrm x)^\top ) = \mathrm x^\top \mathrm A^\top \mathrm x = \mathrm x^\top \mathrm A \, \mathrm x$$
|
Proving that $x^TAx = tr(xx^TA)$? [closed]
Given $\mathrm a, \mathrm b \in \mathbb R^n$,
$$\mbox{tr} ( \, \mathrm a \mathrm b^\top ) = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \mathrm a^\top \mathrm b$$
Thus,
$$\mbox{tr} (\mathrm x \mathrm x^\to
|
44,135
|
Linear regression - is a model "useless" if $R^2$ is very small?
|
Although $R^{2} < 0.01$ is not usually very helpful, the value of a model has to also be judged by (1) the difficulty of the task and (2) whether one hopes to learn tendencies vs. predict responses for individual subjects. Some tasks, such as predicting how may days a patient will live, are very difficult and low $R^{2}$ are not only the norm but are associated with still very useful models. Concerning tendencies, a clinical trial in which treatment B is associated with better patient responses than treatment A may have only a tiny proportion of variation of $Y$ explained by treatment and known covariates yet the tendency dictates that it is better to give treatment B to new patients, all other things being equal.
Note that in the vast majority of cases the bootstrap is run using samples of size $N$ with replacement from a sample of size $N$. Instead of traditional robust regression and bootstrapping I'd recommend one of the families of cumulative probability-based ordinal response models (e.g., proportional odds model).
|
Linear regression - is a model "useless" if $R^2$ is very small?
|
Although $R^{2} < 0.01$ is not usually very helpful, the value of a model has to also be judged by (1) the difficulty of the task and (2) whether one hopes to learn tendencies vs. predict responses fo
|
Linear regression - is a model "useless" if $R^2$ is very small?
Although $R^{2} < 0.01$ is not usually very helpful, the value of a model has to also be judged by (1) the difficulty of the task and (2) whether one hopes to learn tendencies vs. predict responses for individual subjects. Some tasks, such as predicting how may days a patient will live, are very difficult and low $R^{2}$ are not only the norm but are associated with still very useful models. Concerning tendencies, a clinical trial in which treatment B is associated with better patient responses than treatment A may have only a tiny proportion of variation of $Y$ explained by treatment and known covariates yet the tendency dictates that it is better to give treatment B to new patients, all other things being equal.
Note that in the vast majority of cases the bootstrap is run using samples of size $N$ with replacement from a sample of size $N$. Instead of traditional robust regression and bootstrapping I'd recommend one of the families of cumulative probability-based ordinal response models (e.g., proportional odds model).
|
Linear regression - is a model "useless" if $R^2$ is very small?
Although $R^{2} < 0.01$ is not usually very helpful, the value of a model has to also be judged by (1) the difficulty of the task and (2) whether one hopes to learn tendencies vs. predict responses fo
|
44,136
|
Linear regression - is a model "useless" if $R^2$ is very small?
|
Despite traditional negative attitude toward statistical models with low $R^2$, I would like to make two points: 1) "low" is a relative term - one model with a lower $R^2$ could be better (have better explanatory power or parsimony) and more useful (better reflect reality) than others with a higher $R^2$ values. Having said that, a model with the $R^2$ value of 0.7% is most likely not of too much use.
Upon encountering a statistical model with a low $R^2$, it is recommended to use some or all of the following approaches (http://people.duke.edu/~rnau/rsquared.htm):
Define model's variables a priori (design of experiment or well-defined hypotheses);
Additionally clean data, if possible (outliers, inconsistencies, ambiguous data);
Make sure that estimates are (at least jointly) significant (increase sample size, if needed and possible, particularly if correlations are weak);
Perform cross-validation (out-of-sample testing, as mentioned in some comments above).
NOTE: Just before posting this answer, I've discovered that you reformulated your question. Nevertheless, I decided to post it with the hope that it might be useful to you or other people.
|
Linear regression - is a model "useless" if $R^2$ is very small?
|
Despite traditional negative attitude toward statistical models with low $R^2$, I would like to make two points: 1) "low" is a relative term - one model with a lower $R^2$ could be better (have better
|
Linear regression - is a model "useless" if $R^2$ is very small?
Despite traditional negative attitude toward statistical models with low $R^2$, I would like to make two points: 1) "low" is a relative term - one model with a lower $R^2$ could be better (have better explanatory power or parsimony) and more useful (better reflect reality) than others with a higher $R^2$ values. Having said that, a model with the $R^2$ value of 0.7% is most likely not of too much use.
Upon encountering a statistical model with a low $R^2$, it is recommended to use some or all of the following approaches (http://people.duke.edu/~rnau/rsquared.htm):
Define model's variables a priori (design of experiment or well-defined hypotheses);
Additionally clean data, if possible (outliers, inconsistencies, ambiguous data);
Make sure that estimates are (at least jointly) significant (increase sample size, if needed and possible, particularly if correlations are weak);
Perform cross-validation (out-of-sample testing, as mentioned in some comments above).
NOTE: Just before posting this answer, I've discovered that you reformulated your question. Nevertheless, I decided to post it with the hope that it might be useful to you or other people.
|
Linear regression - is a model "useless" if $R^2$ is very small?
Despite traditional negative attitude toward statistical models with low $R^2$, I would like to make two points: 1) "low" is a relative term - one model with a lower $R^2$ could be better (have better
|
44,137
|
Linear regression - is a model "useless" if $R^2$ is very small?
|
If your model is correctly specified and the appropriate conditions for your inference method are satisfied (e.g. i.i.d. Gaussian errors if you want to use a t-test), then you should be able to achieve your nominal type I error rate, regardless of n and regardless of $R^2$. (Though as a separate issue, a large sample size will bring down your Type II error rate by increasing power, so it may be worthwhile reducing your significance level $\alpha$ to bring down your Type I error rate too; the cost of an increased Type II error rate may be worth paying now you have more power to play with. If you were to do this, your p-value may no longer look quite so impressive!)
In other words: there's no need to be more suspicious of a significant result just because the $R^2$ is low, and it isn't true that "any variable" will be significant just because the sample size is large. If the variable does not actually influence your response variable once other variables are taken into account, then if we take the 5% level as significant, the variable will only have a 5% chance of being (incorrectly) deemed significant even if your sample size is in the trillions. But remember that's subject to the conditions I mentioned earlier. Moreover, a variable which only has a very weak relationship with the dependent variable (the true slope $\beta$ is close to, but not exactly, zero) is much more likely to be detected as statistically significant in a large sample because of the increased power. This is where the difference between "statistical significance" and "practical significance" is important. Looking at the confidence interval for the slope you may find the variable will only have a negligible impact on predictions, even if it's on the side of the confidence interval furthest from zero. This is a feature of large sample sizes, not a bug - the larger the sample size, the better you understand the relationships of your variables, even the hard-to-detect weak relationships.
On the other hand, having a high $R^2$ does not mean you are safe from detecting a spurious relationship that results in poor out-of-sample performance. A situation like omitted-variable bias can strike regardless of whether your $R^2$ is high or low: if you misspecify your model, and one of the variables you include in the model is correlated with an omitted variable (one that you may not even have measured) then its coefficient estimate will be biased. It might be that it should have no influence on your dependent variable (the true $\beta$ is zero) but you may find it appears as significantly different from zero. If its correlation with the omitted variable is very weak, then this spurious significance is unlikely to occur unless your sample size is quite large. But this isn't a reason to prefer smaller sample sizes, and there's nothing special to worry about in the context of a low $R^2$. A quick demonstration by simulation in R that you can find a spurious relationship even with high $R^2$:
require(MASS) # for multivariate normal simulation
set.seed(123)
n <- 10000
X <- mvrnorm(n=n, mu=c(10, 10), Sigma=matrix(c(1,0.9,0.9,1), nrow=2))
xomitted <- X[,1]
xspurious <- X[,2] # correlated with xomitted, rho=0.9
y <- 3*xomitted + rnorm(n=n, mean=0, sd=1) # true model with noise sd=1
ovb.lm <- lm(y ~ xspurious)
summary(ovb.lm) # xspurious should have coefficient 0 but is highly sig
The output from the regression shows a significant coefficient on xomitted even though which the true slope is zero. The high $R^2$ was no guarantor of a non-spurious relationship.
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.90353 0.16600 17.49 <2e-16 ***
xspurious 2.71003 0.01652 164.00 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.653 on 9998 degrees of freedom
Multiple R-squared: 0.729, Adjusted R-squared: 0.729
F-statistic: 2.689e+04 on 1 and 9998 DF, p-value: < 2.2e-16
If you are dealing with an experimental situation where all relevant variables are measured or controlled and you may have clear theoretical grounds for the structure of your model, then this might all fade as a concern somewhat. In an experiment we may be able to hold unmeasured variables constant, or randomize them (e.g. allocations in a clinical trial) - this will eliminate the correlation between the omitted and observed variables. The problem can be more acute in observational data, where there can be a tangle of correlations between the things we can measure and - possibly more important - unobservable ones, and in fields like social sciences it may be impossible to justify a particular model specification a priori from theory (particularly things like which power a variable should appear to).
Finally, a more general statement on whether your model is "useless". Obviously with an $R^2$ below 1% you are not going to get good predictive performance. But if we are modelling a noisy process, or one with many factors but few we can measure, then good predictive performance is too much to hope for. It can still be useful to know that two variables aren't particularly related - in general we want the 95% confidence interval for our regression coefficients to be very narrow (indicating less uncertainty about the slope, for which purpose we desire a large sample size), and if that happens to be close to zero then we have learned the useful fact that we don't expect changes to that variable to have much influence on our response variable. But if the response variable is important to us (Frank Harrell's medical example is a good one, another might be the "marginal gains" theory in sport) then even ways to weakly influence it might be important to us. If your main concern is out-of-sample performance, then you should probably be paying close attention to the model specification.
|
Linear regression - is a model "useless" if $R^2$ is very small?
|
If your model is correctly specified and the appropriate conditions for your inference method are satisfied (e.g. i.i.d. Gaussian errors if you want to use a t-test), then you should be able to achiev
|
Linear regression - is a model "useless" if $R^2$ is very small?
If your model is correctly specified and the appropriate conditions for your inference method are satisfied (e.g. i.i.d. Gaussian errors if you want to use a t-test), then you should be able to achieve your nominal type I error rate, regardless of n and regardless of $R^2$. (Though as a separate issue, a large sample size will bring down your Type II error rate by increasing power, so it may be worthwhile reducing your significance level $\alpha$ to bring down your Type I error rate too; the cost of an increased Type II error rate may be worth paying now you have more power to play with. If you were to do this, your p-value may no longer look quite so impressive!)
In other words: there's no need to be more suspicious of a significant result just because the $R^2$ is low, and it isn't true that "any variable" will be significant just because the sample size is large. If the variable does not actually influence your response variable once other variables are taken into account, then if we take the 5% level as significant, the variable will only have a 5% chance of being (incorrectly) deemed significant even if your sample size is in the trillions. But remember that's subject to the conditions I mentioned earlier. Moreover, a variable which only has a very weak relationship with the dependent variable (the true slope $\beta$ is close to, but not exactly, zero) is much more likely to be detected as statistically significant in a large sample because of the increased power. This is where the difference between "statistical significance" and "practical significance" is important. Looking at the confidence interval for the slope you may find the variable will only have a negligible impact on predictions, even if it's on the side of the confidence interval furthest from zero. This is a feature of large sample sizes, not a bug - the larger the sample size, the better you understand the relationships of your variables, even the hard-to-detect weak relationships.
On the other hand, having a high $R^2$ does not mean you are safe from detecting a spurious relationship that results in poor out-of-sample performance. A situation like omitted-variable bias can strike regardless of whether your $R^2$ is high or low: if you misspecify your model, and one of the variables you include in the model is correlated with an omitted variable (one that you may not even have measured) then its coefficient estimate will be biased. It might be that it should have no influence on your dependent variable (the true $\beta$ is zero) but you may find it appears as significantly different from zero. If its correlation with the omitted variable is very weak, then this spurious significance is unlikely to occur unless your sample size is quite large. But this isn't a reason to prefer smaller sample sizes, and there's nothing special to worry about in the context of a low $R^2$. A quick demonstration by simulation in R that you can find a spurious relationship even with high $R^2$:
require(MASS) # for multivariate normal simulation
set.seed(123)
n <- 10000
X <- mvrnorm(n=n, mu=c(10, 10), Sigma=matrix(c(1,0.9,0.9,1), nrow=2))
xomitted <- X[,1]
xspurious <- X[,2] # correlated with xomitted, rho=0.9
y <- 3*xomitted + rnorm(n=n, mean=0, sd=1) # true model with noise sd=1
ovb.lm <- lm(y ~ xspurious)
summary(ovb.lm) # xspurious should have coefficient 0 but is highly sig
The output from the regression shows a significant coefficient on xomitted even though which the true slope is zero. The high $R^2$ was no guarantor of a non-spurious relationship.
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.90353 0.16600 17.49 <2e-16 ***
xspurious 2.71003 0.01652 164.00 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.653 on 9998 degrees of freedom
Multiple R-squared: 0.729, Adjusted R-squared: 0.729
F-statistic: 2.689e+04 on 1 and 9998 DF, p-value: < 2.2e-16
If you are dealing with an experimental situation where all relevant variables are measured or controlled and you may have clear theoretical grounds for the structure of your model, then this might all fade as a concern somewhat. In an experiment we may be able to hold unmeasured variables constant, or randomize them (e.g. allocations in a clinical trial) - this will eliminate the correlation between the omitted and observed variables. The problem can be more acute in observational data, where there can be a tangle of correlations between the things we can measure and - possibly more important - unobservable ones, and in fields like social sciences it may be impossible to justify a particular model specification a priori from theory (particularly things like which power a variable should appear to).
Finally, a more general statement on whether your model is "useless". Obviously with an $R^2$ below 1% you are not going to get good predictive performance. But if we are modelling a noisy process, or one with many factors but few we can measure, then good predictive performance is too much to hope for. It can still be useful to know that two variables aren't particularly related - in general we want the 95% confidence interval for our regression coefficients to be very narrow (indicating less uncertainty about the slope, for which purpose we desire a large sample size), and if that happens to be close to zero then we have learned the useful fact that we don't expect changes to that variable to have much influence on our response variable. But if the response variable is important to us (Frank Harrell's medical example is a good one, another might be the "marginal gains" theory in sport) then even ways to weakly influence it might be important to us. If your main concern is out-of-sample performance, then you should probably be paying close attention to the model specification.
|
Linear regression - is a model "useless" if $R^2$ is very small?
If your model is correctly specified and the appropriate conditions for your inference method are satisfied (e.g. i.i.d. Gaussian errors if you want to use a t-test), then you should be able to achiev
|
44,138
|
Linear regression - is a model "useless" if $R^2$ is very small?
|
A model is useful if it allows you to better understand what is happening with your data/theory and if it is correctly computed. In some cases, when the criterion variable is determined by a huge number of causes, getting high $R^2$ is very difficult.
|
Linear regression - is a model "useless" if $R^2$ is very small?
|
A model is useful if it allows you to better understand what is happening with your data/theory and if it is correctly computed. In some cases, when the criterion variable is determined by a huge numb
|
Linear regression - is a model "useless" if $R^2$ is very small?
A model is useful if it allows you to better understand what is happening with your data/theory and if it is correctly computed. In some cases, when the criterion variable is determined by a huge number of causes, getting high $R^2$ is very difficult.
|
Linear regression - is a model "useless" if $R^2$ is very small?
A model is useful if it allows you to better understand what is happening with your data/theory and if it is correctly computed. In some cases, when the criterion variable is determined by a huge numb
|
44,139
|
Naive Bayes: Imbalanced Dataset in Real-time Scenario
|
To create a good model, the model has to be built on training data which is of the same "structure" as the data the model will applied later on. This is the one boring assumption which underlies all classification models.
So by using an balanced data set meanwhile the real world is not balanced, you have already introduced a bias. While there are cases where this is not a problem (imagine perfectly separable (non-linear) classes, a model built on a balanced data set containing all border-relevant points will be still working perfectly on a skewed sample), classifying documents is often a game of probabilities and hence class skew is more problematic.
My suggestions:
Built the model on the imbalanced set with the same proportions as in production. If you have to sample for this, then perform multiple runs across different samples during validation to improve generalization power.
The "bias" towards the negative class in an imbalanced set originates from the-best-guess-is-majority-class-if-everything-else-is-equal, something which Naive Bayes is sensitive to (especially when a lot of (irrelevant) features are involved). Use a different classifier which can capture inter-feature/word-dependencies to reduce this. I'd try Gradient Boosting with trees as described in chapter 10 "Boosting and Additive Trees" of The elements of statistical learning.
You are currently using "plain precision / recall" as metric. Based on your productions requirements, estimate whether a false positive is equally bad as a false negative and adjust the metric accordingly.
|
Naive Bayes: Imbalanced Dataset in Real-time Scenario
|
To create a good model, the model has to be built on training data which is of the same "structure" as the data the model will applied later on. This is the one boring assumption which underlies all c
|
Naive Bayes: Imbalanced Dataset in Real-time Scenario
To create a good model, the model has to be built on training data which is of the same "structure" as the data the model will applied later on. This is the one boring assumption which underlies all classification models.
So by using an balanced data set meanwhile the real world is not balanced, you have already introduced a bias. While there are cases where this is not a problem (imagine perfectly separable (non-linear) classes, a model built on a balanced data set containing all border-relevant points will be still working perfectly on a skewed sample), classifying documents is often a game of probabilities and hence class skew is more problematic.
My suggestions:
Built the model on the imbalanced set with the same proportions as in production. If you have to sample for this, then perform multiple runs across different samples during validation to improve generalization power.
The "bias" towards the negative class in an imbalanced set originates from the-best-guess-is-majority-class-if-everything-else-is-equal, something which Naive Bayes is sensitive to (especially when a lot of (irrelevant) features are involved). Use a different classifier which can capture inter-feature/word-dependencies to reduce this. I'd try Gradient Boosting with trees as described in chapter 10 "Boosting and Additive Trees" of The elements of statistical learning.
You are currently using "plain precision / recall" as metric. Based on your productions requirements, estimate whether a false positive is equally bad as a false negative and adjust the metric accordingly.
|
Naive Bayes: Imbalanced Dataset in Real-time Scenario
To create a good model, the model has to be built on training data which is of the same "structure" as the data the model will applied later on. This is the one boring assumption which underlies all c
|
44,140
|
Naive Bayes: Imbalanced Dataset in Real-time Scenario
|
In the paper "Tackling the Poor Assumptions of Naive Bayes Text Classifiers" the authors deal with this problem, among others, which stem from the character of the naive bayes algorithm. Having highly skewed data leads to a bias in your weights, which causes the bad precision.
Concretely for the problem of skew data, what they proposed what they call the complementary naive bayes algorithm, where to train each class they use all data, but the sample from that class. The idea is that they get more even training sets.
They idea you mean is usually called stratified sampling, which is also available in scikit-learn, and is worth a try.
In addition to sampling methods, a smart method is to normalize the word counts to correct for weight bias as explained here (Naive Bayes for Text Classification with Unbalanced Classes).
The idea is to make the estimated conditional probabilities insensitive to skewed counts. If you have too few documents of one class, and the are comparable in length to those of the other class, when words appear more often in documents of one class, Naive Bayes will tend to associate them with documents of other classes. By normalizing the word counts across classes, this bias is compensated for.
Good luck!
|
Naive Bayes: Imbalanced Dataset in Real-time Scenario
|
In the paper "Tackling the Poor Assumptions of Naive Bayes Text Classifiers" the authors deal with this problem, among others, which stem from the character of the naive bayes algorithm. Having highly
|
Naive Bayes: Imbalanced Dataset in Real-time Scenario
In the paper "Tackling the Poor Assumptions of Naive Bayes Text Classifiers" the authors deal with this problem, among others, which stem from the character of the naive bayes algorithm. Having highly skewed data leads to a bias in your weights, which causes the bad precision.
Concretely for the problem of skew data, what they proposed what they call the complementary naive bayes algorithm, where to train each class they use all data, but the sample from that class. The idea is that they get more even training sets.
They idea you mean is usually called stratified sampling, which is also available in scikit-learn, and is worth a try.
In addition to sampling methods, a smart method is to normalize the word counts to correct for weight bias as explained here (Naive Bayes for Text Classification with Unbalanced Classes).
The idea is to make the estimated conditional probabilities insensitive to skewed counts. If you have too few documents of one class, and the are comparable in length to those of the other class, when words appear more often in documents of one class, Naive Bayes will tend to associate them with documents of other classes. By normalizing the word counts across classes, this bias is compensated for.
Good luck!
|
Naive Bayes: Imbalanced Dataset in Real-time Scenario
In the paper "Tackling the Poor Assumptions of Naive Bayes Text Classifiers" the authors deal with this problem, among others, which stem from the character of the naive bayes algorithm. Having highly
|
44,141
|
Naive Bayes: Imbalanced Dataset in Real-time Scenario
|
Any Bayesian classifier can be easily tweaked to incorporate knowledge about how often a particular class is expected. When you train a Bayesian classifier, two sets of parameters are learned:
P(C=c), the probability that an observation belongs to class C (the class prior probabilities)
P(F=f | C=c), the probability that an observation has the feature set F given that it belongs class C (i.e. its likelihood).
The classification rule is to choose a c that maximizes P(C=c)*P(F=f|C=c). (See: http://en.wikipedia.org/wiki/Naive_Bayes_classifier#Constructing_a_classifier_from_the_probability_model)
You can modify P(C=c) according to the expected occurrences of positive and negative observations in your production environment. Then your classification criterion will be optimal.
I wouldn't reduce the amount of positive observations in the training dataset. This will indeed change the prior probabilities to better match your test dataset. However, it will hurt the estimation of the likelihood parameters (since you won't use all available data). It's much better to use all available data and then modify the class prior probabilities according to your needs. When using discriminative classifiers such as SVM, the latter approach is less straightforward (since P(C=c) isn't explicitly modeled) and then the logic of keeping both the training and test datasets similarly (im)balanced makes sense.
|
Naive Bayes: Imbalanced Dataset in Real-time Scenario
|
Any Bayesian classifier can be easily tweaked to incorporate knowledge about how often a particular class is expected. When you train a Bayesian classifier, two sets of parameters are learned:
P(C=c)
|
Naive Bayes: Imbalanced Dataset in Real-time Scenario
Any Bayesian classifier can be easily tweaked to incorporate knowledge about how often a particular class is expected. When you train a Bayesian classifier, two sets of parameters are learned:
P(C=c), the probability that an observation belongs to class C (the class prior probabilities)
P(F=f | C=c), the probability that an observation has the feature set F given that it belongs class C (i.e. its likelihood).
The classification rule is to choose a c that maximizes P(C=c)*P(F=f|C=c). (See: http://en.wikipedia.org/wiki/Naive_Bayes_classifier#Constructing_a_classifier_from_the_probability_model)
You can modify P(C=c) according to the expected occurrences of positive and negative observations in your production environment. Then your classification criterion will be optimal.
I wouldn't reduce the amount of positive observations in the training dataset. This will indeed change the prior probabilities to better match your test dataset. However, it will hurt the estimation of the likelihood parameters (since you won't use all available data). It's much better to use all available data and then modify the class prior probabilities according to your needs. When using discriminative classifiers such as SVM, the latter approach is less straightforward (since P(C=c) isn't explicitly modeled) and then the logic of keeping both the training and test datasets similarly (im)balanced makes sense.
|
Naive Bayes: Imbalanced Dataset in Real-time Scenario
Any Bayesian classifier can be easily tweaked to incorporate knowledge about how often a particular class is expected. When you train a Bayesian classifier, two sets of parameters are learned:
P(C=c)
|
44,142
|
interaction of categorical and continuous variables
|
In the scenario you describe least squares regression will allow you to tell a very straightforward story:
First of all, imagine that you have no dichotomous independent variable. So:
(1) $y_{i} = \beta_{0} + \beta_{1}x_{1i} + \varepsilon_{i}$
Your regression describes the relationship between your dependent variable $y$ and your continuous independent variable $x_{1}$ as a straight line, with intercept $\beta_{0}$ and slope $\beta_{1}$. Cool? Cool.
Now add both the dichotomous independent variable $x_{2}$ and the interaction between $x_{1}$ and $x_{2}$ to the model:
(2) $y_{i} = \beta_{0} + \beta_{1}x_{1i} + \beta_{2}x_{2i} + \beta_{3}x_{1i}x_{2i} + \varepsilon_{i}$
So now what is your model telling you? Well, (assuming $x_{2}$ is coded 0/1) when $x_{2} = 0$, then the model reduces to equation (1) because $\beta_{2} \times 0 = 0$ and $\beta_{3} \times x_{1} \times 0 = 0$. So that is easy-peasy puddin' pie.
What about when $x_{2} =1$? Well now the $y$-intercept is $\beta_{0} + \beta_{2}$ (Right? Because $\beta_{2} \times 1 = \beta_{2}$).
And the slope of the line relating $y$ to $x_{1}$ is now $\beta_{1} + \beta_{3}$ (Right? Because $\beta_{1}\times x_{1} + \beta_{3} \times x_{1} \times 1 = \beta_{1}\times x_{1} + \beta_{3} \times x_{1} = (\beta_{1} + \beta_{3})\times x_{1}$).
So when $x_{2}=1$ you simply have a second regression line relating $y$ to $x_{1}$, with a different intercept (if $\beta_{2} \ne 0$) and a different slope (if $\beta_{3} \ne 0$ which will be true if you tested a significant interaction term in, say, ANOVA).
How do you communicate this? A single graph with two regression lines overlaying your data (possibly with different colored/shaped/sized markers when $x_{2}=1$), and a label indicating which line corresponds to $x_{2}=0$ and $x_{2}=1$. Also providing your audience with the values of the $\beta$s and their standard errors and/or confidence intervals is good (like, in a table of multiple regression results).
Cool? Cool.
Finally, while all the above tells you about trend relationships between $y$ and $x_{1}$ given $x_{2}$, least squares regression also tells you about strength of association. If you had a single independent variable, you'd probably want to use something like $R^{2}$ to describe this strength of association, but when you add variables $R^{2}$ doesn't quite mean what it did before. So you might use generalized $R^{2}$, or Pseudo-$R^{2}$ or some such.
|
interaction of categorical and continuous variables
|
In the scenario you describe least squares regression will allow you to tell a very straightforward story:
First of all, imagine that you have no dichotomous independent variable. So:
(1) $y_{i} = \be
|
interaction of categorical and continuous variables
In the scenario you describe least squares regression will allow you to tell a very straightforward story:
First of all, imagine that you have no dichotomous independent variable. So:
(1) $y_{i} = \beta_{0} + \beta_{1}x_{1i} + \varepsilon_{i}$
Your regression describes the relationship between your dependent variable $y$ and your continuous independent variable $x_{1}$ as a straight line, with intercept $\beta_{0}$ and slope $\beta_{1}$. Cool? Cool.
Now add both the dichotomous independent variable $x_{2}$ and the interaction between $x_{1}$ and $x_{2}$ to the model:
(2) $y_{i} = \beta_{0} + \beta_{1}x_{1i} + \beta_{2}x_{2i} + \beta_{3}x_{1i}x_{2i} + \varepsilon_{i}$
So now what is your model telling you? Well, (assuming $x_{2}$ is coded 0/1) when $x_{2} = 0$, then the model reduces to equation (1) because $\beta_{2} \times 0 = 0$ and $\beta_{3} \times x_{1} \times 0 = 0$. So that is easy-peasy puddin' pie.
What about when $x_{2} =1$? Well now the $y$-intercept is $\beta_{0} + \beta_{2}$ (Right? Because $\beta_{2} \times 1 = \beta_{2}$).
And the slope of the line relating $y$ to $x_{1}$ is now $\beta_{1} + \beta_{3}$ (Right? Because $\beta_{1}\times x_{1} + \beta_{3} \times x_{1} \times 1 = \beta_{1}\times x_{1} + \beta_{3} \times x_{1} = (\beta_{1} + \beta_{3})\times x_{1}$).
So when $x_{2}=1$ you simply have a second regression line relating $y$ to $x_{1}$, with a different intercept (if $\beta_{2} \ne 0$) and a different slope (if $\beta_{3} \ne 0$ which will be true if you tested a significant interaction term in, say, ANOVA).
How do you communicate this? A single graph with two regression lines overlaying your data (possibly with different colored/shaped/sized markers when $x_{2}=1$), and a label indicating which line corresponds to $x_{2}=0$ and $x_{2}=1$. Also providing your audience with the values of the $\beta$s and their standard errors and/or confidence intervals is good (like, in a table of multiple regression results).
Cool? Cool.
Finally, while all the above tells you about trend relationships between $y$ and $x_{1}$ given $x_{2}$, least squares regression also tells you about strength of association. If you had a single independent variable, you'd probably want to use something like $R^{2}$ to describe this strength of association, but when you add variables $R^{2}$ doesn't quite mean what it did before. So you might use generalized $R^{2}$, or Pseudo-$R^{2}$ or some such.
|
interaction of categorical and continuous variables
In the scenario you describe least squares regression will allow you to tell a very straightforward story:
First of all, imagine that you have no dichotomous independent variable. So:
(1) $y_{i} = \be
|
44,143
|
interaction of categorical and continuous variables
|
@Alexis seems to cover the equations pretty well. Here's some example code in r:
set.seed(8);d8a=data.frame(x=rnorm(99),z=rbinom(99,1,.5)) #Data sim'd to fit the scenario
d8a$y=(d8a$x+rnorm(99,0,3))*(2*d8a$z-1) #Guarantees an interaction
summary(lm(y~scale(x)*factor(z),d8a)) #Fits a GLM with OLS – this is the part you need
$$\rm Output$$
Residuals:
Min 1Q Median 3Q Max
-6.1575 -2.1416 -0.2051 1.8558 6.5765
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.17374 0.43557 -0.399 0.690867
scale(x) -1.11354 0.45224 -2.462 0.015608 *
factor(z)1 0.01546 0.58976 0.026 0.979144
scale(x):factor(z)1 2.24831 0.59689 3.767 0.000287 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.922 on 95 degrees of freedom
Multiple R-squared: 0.1328, Adjusted R-squared: 0.1054
F-statistic: 4.85 on 3 and 95 DF, p-value: 0.003492
$$\rm Plot$$
require(ggplot2);ggplot(d8a,aes(x,y,color=factor(z)))+stat_smooth(method=lm)+geom_point()
|
interaction of categorical and continuous variables
|
@Alexis seems to cover the equations pretty well. Here's some example code in r:
set.seed(8);d8a=data.frame(x=rnorm(99),z=rbinom(99,1,.5)) #Data sim'd to fit the scenario
d8a$y=(d8a$x+rnorm(99,0,3)
|
interaction of categorical and continuous variables
@Alexis seems to cover the equations pretty well. Here's some example code in r:
set.seed(8);d8a=data.frame(x=rnorm(99),z=rbinom(99,1,.5)) #Data sim'd to fit the scenario
d8a$y=(d8a$x+rnorm(99,0,3))*(2*d8a$z-1) #Guarantees an interaction
summary(lm(y~scale(x)*factor(z),d8a)) #Fits a GLM with OLS – this is the part you need
$$\rm Output$$
Residuals:
Min 1Q Median 3Q Max
-6.1575 -2.1416 -0.2051 1.8558 6.5765
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.17374 0.43557 -0.399 0.690867
scale(x) -1.11354 0.45224 -2.462 0.015608 *
factor(z)1 0.01546 0.58976 0.026 0.979144
scale(x):factor(z)1 2.24831 0.59689 3.767 0.000287 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.922 on 95 degrees of freedom
Multiple R-squared: 0.1328, Adjusted R-squared: 0.1054
F-statistic: 4.85 on 3 and 95 DF, p-value: 0.003492
$$\rm Plot$$
require(ggplot2);ggplot(d8a,aes(x,y,color=factor(z)))+stat_smooth(method=lm)+geom_point()
|
interaction of categorical and continuous variables
@Alexis seems to cover the equations pretty well. Here's some example code in r:
set.seed(8);d8a=data.frame(x=rnorm(99),z=rbinom(99,1,.5)) #Data sim'd to fit the scenario
d8a$y=(d8a$x+rnorm(99,0,3)
|
44,144
|
What does "a distribution over distributions" mean?
|
Suppose there are boxes with chocolates, with some portion of dark and sweet chocolates. And you are interested in eating them (chocolates, not - boxes).
You pick at random one of the boxes. (Some kinds of boxes can be more common than others.) Then, you can pick at random one of the chocolates.
So you have a distribution (a collection of boxes) of distributions (chocolates in a box).
|
What does "a distribution over distributions" mean?
|
Suppose there are boxes with chocolates, with some portion of dark and sweet chocolates. And you are interested in eating them (chocolates, not - boxes).
You pick at random one of the boxes. (Some kin
|
What does "a distribution over distributions" mean?
Suppose there are boxes with chocolates, with some portion of dark and sweet chocolates. And you are interested in eating them (chocolates, not - boxes).
You pick at random one of the boxes. (Some kinds of boxes can be more common than others.) Then, you can pick at random one of the chocolates.
So you have a distribution (a collection of boxes) of distributions (chocolates in a box).
|
What does "a distribution over distributions" mean?
Suppose there are boxes with chocolates, with some portion of dark and sweet chocolates. And you are interested in eating them (chocolates, not - boxes).
You pick at random one of the boxes. (Some kin
|
44,145
|
What does "a distribution over distributions" mean?
|
Suppose we are going to play a game in which I will flip a coin. If the coin is heads (H) then you win, if the coin is tails (T) then I win. To figure out whether to play the game, you would like to know the probability of H, P(H), and the probability of tails, P(T).
We could write down these two probabilities in a list format, just for record keeping: [P(H), P(T)]. So now, we have a discrete distribution over the possible outcomes, P(H) for H and P(T) for T. Let's call this list "L", so L = [P(H), P(T)] and if we know what L is then we know the distribution over the possible outcomes of the game.
But let's go one step further. Let's say that because I've spent a long time studying math in my life, there is a 3/4 chance that I wake up cranky on any given day. So there is a 1/4 chance that I wake up feeling happy.
Let's say that if I wake up cranky in the morning, I will pick a coin that has P(H) = 1/10 and P(T) = 9/10. But if I wake up happy, I will pick a coin with P(H) = 1/2 and P(T) = 1/2.
In that case, there would be $L_{cranky}$ = [1/10, 9/10], and $L_{happy}$ = [1/2, 1/2].
So what will the actual list of probabilities, plain old L, be for this game?
With a 3/4 chance, the list L will be $L_{cranky}$ and with a 1/4 chance L will be $L_{happy}$.
So here we have a discrete probability distribution but the values that it describes are themselves lists (L) containing probabilities. So this distribution is like a "meta" distribution to the eventual coin-based game we will play.
It is a probability distribution over a space of outcomes where the outcomes are each a probability distribution.
|
What does "a distribution over distributions" mean?
|
Suppose we are going to play a game in which I will flip a coin. If the coin is heads (H) then you win, if the coin is tails (T) then I win. To figure out whether to play the game, you would like to k
|
What does "a distribution over distributions" mean?
Suppose we are going to play a game in which I will flip a coin. If the coin is heads (H) then you win, if the coin is tails (T) then I win. To figure out whether to play the game, you would like to know the probability of H, P(H), and the probability of tails, P(T).
We could write down these two probabilities in a list format, just for record keeping: [P(H), P(T)]. So now, we have a discrete distribution over the possible outcomes, P(H) for H and P(T) for T. Let's call this list "L", so L = [P(H), P(T)] and if we know what L is then we know the distribution over the possible outcomes of the game.
But let's go one step further. Let's say that because I've spent a long time studying math in my life, there is a 3/4 chance that I wake up cranky on any given day. So there is a 1/4 chance that I wake up feeling happy.
Let's say that if I wake up cranky in the morning, I will pick a coin that has P(H) = 1/10 and P(T) = 9/10. But if I wake up happy, I will pick a coin with P(H) = 1/2 and P(T) = 1/2.
In that case, there would be $L_{cranky}$ = [1/10, 9/10], and $L_{happy}$ = [1/2, 1/2].
So what will the actual list of probabilities, plain old L, be for this game?
With a 3/4 chance, the list L will be $L_{cranky}$ and with a 1/4 chance L will be $L_{happy}$.
So here we have a discrete probability distribution but the values that it describes are themselves lists (L) containing probabilities. So this distribution is like a "meta" distribution to the eventual coin-based game we will play.
It is a probability distribution over a space of outcomes where the outcomes are each a probability distribution.
|
What does "a distribution over distributions" mean?
Suppose we are going to play a game in which I will flip a coin. If the coin is heads (H) then you win, if the coin is tails (T) then I win. To figure out whether to play the game, you would like to k
|
44,146
|
Intuitive understanding of regularization
|
overfitting is always bad as it means you have done something to your model that means that it generalisation performance has become worse. This is less likely to happen when you have lots of data, and in such circumstances regularisation tends to be less helpful, but over-fitting is still something you don't want.
This diagram (from Wikimedia) shows an over-fitted regression model
In order for the regression line to pass through each of the data points, the regression has high curvature at many points. At these points, the output of the model is very sensitive to changes in the value of the input variable. This generally requires model parameters of large magnitude, so that small changes in the input are magnified into large changes in the output.
No, regularisation is not always needed, particularly if you have so much data that the model isn't flexible enough to exploit the noise. I would recommend putting regularisation in and use cross-validation to set the regularisation parameter(s). If regularisation is unhelpful, cross-validation will tend to make the regularisation parameter small enough that it has no real effect. I tend to use leave-one-out cross-validaition as it can be computed very cheaply for many interesting models (linear regression, SVMs, kernel machines, Gaussian processes etc.), even though its high variance is less attractive.
|
Intuitive understanding of regularization
|
overfitting is always bad as it means you have done something to your model that means that it generalisation performance has become worse. This is less likely to happen when you have lots of data, a
|
Intuitive understanding of regularization
overfitting is always bad as it means you have done something to your model that means that it generalisation performance has become worse. This is less likely to happen when you have lots of data, and in such circumstances regularisation tends to be less helpful, but over-fitting is still something you don't want.
This diagram (from Wikimedia) shows an over-fitted regression model
In order for the regression line to pass through each of the data points, the regression has high curvature at many points. At these points, the output of the model is very sensitive to changes in the value of the input variable. This generally requires model parameters of large magnitude, so that small changes in the input are magnified into large changes in the output.
No, regularisation is not always needed, particularly if you have so much data that the model isn't flexible enough to exploit the noise. I would recommend putting regularisation in and use cross-validation to set the regularisation parameter(s). If regularisation is unhelpful, cross-validation will tend to make the regularisation parameter small enough that it has no real effect. I tend to use leave-one-out cross-validaition as it can be computed very cheaply for many interesting models (linear regression, SVMs, kernel machines, Gaussian processes etc.), even though its high variance is less attractive.
|
Intuitive understanding of regularization
overfitting is always bad as it means you have done something to your model that means that it generalisation performance has become worse. This is less likely to happen when you have lots of data, a
|
44,147
|
Intuitive understanding of regularization
|
It depends on your model & the specificity of your data. For instance, fitting an unpruned decision tree will always lead to overfitting with even just a few variables. The same goes with parametric models, where a large number of parameters can lead to overfitting, even if there is a lot of data.
Eitherway, you sould systematically try to tune the complexity of your model to assure the best generalisation error.
I don't think it's as much a problem of "large" weights than "unconstrained" weights. Adding a regularisation term to regression basically forces your coefficients to a region near zero (or some other predefined prior value(s) ). The bayesian interpretation of regularisation makes it even more obvious : the regularisation parameter (for ridge regression, say) governs the standard deviation of the prior given to the coefficients. High regularisation means a higher "chance" for small coefficients, and constrains their value, thus reducing the freedom of your model and thus overfitting.
Depends on your model. If you've got huge amounts of data and 2 variables and are doing linear regression, then probably not. If you're fitting polynomials to 100 data points, then yeah. If you're unsure how complex your model is w.r.t. your training data, then just try with small amounts of regularisation and see if generalisation error improves (using a validation set or X-validation).
|
Intuitive understanding of regularization
|
It depends on your model & the specificity of your data. For instance, fitting an unpruned decision tree will always lead to overfitting with even just a few variables. The same goes with parametric m
|
Intuitive understanding of regularization
It depends on your model & the specificity of your data. For instance, fitting an unpruned decision tree will always lead to overfitting with even just a few variables. The same goes with parametric models, where a large number of parameters can lead to overfitting, even if there is a lot of data.
Eitherway, you sould systematically try to tune the complexity of your model to assure the best generalisation error.
I don't think it's as much a problem of "large" weights than "unconstrained" weights. Adding a regularisation term to regression basically forces your coefficients to a region near zero (or some other predefined prior value(s) ). The bayesian interpretation of regularisation makes it even more obvious : the regularisation parameter (for ridge regression, say) governs the standard deviation of the prior given to the coefficients. High regularisation means a higher "chance" for small coefficients, and constrains their value, thus reducing the freedom of your model and thus overfitting.
Depends on your model. If you've got huge amounts of data and 2 variables and are doing linear regression, then probably not. If you're fitting polynomials to 100 data points, then yeah. If you're unsure how complex your model is w.r.t. your training data, then just try with small amounts of regularisation and see if generalisation error improves (using a validation set or X-validation).
|
Intuitive understanding of regularization
It depends on your model & the specificity of your data. For instance, fitting an unpruned decision tree will always lead to overfitting with even just a few variables. The same goes with parametric m
|
44,148
|
Intuitive understanding of regularization
|
Is overfitting bad when we have really a lot of data?
Overfitting with lot of data is still overfitting and overfitting is bad.
I don't understand why "very large weights fit the training data very
well"?
I found an example in Deep Learning by Goodfellow(page 293):
Suppose we apply logistic regression to a problem where the classes are linearly separable. If a set of weights $w$ make the model fit the data very well, then it's clear that $2w$ would provide us higher likelihood. And in theory after many iterations of opotimization this increasing would never hald.
Is regularization always needed?
This question seems equal to: does overfitting always occur? If there exists data the model has never seen during training, the overfitting may occur and hence regularization is necessary. It seems that hardly can we train the model with all possible cases, then normally regularization is always necessary.
|
Intuitive understanding of regularization
|
Is overfitting bad when we have really a lot of data?
Overfitting with lot of data is still overfitting and overfitting is bad.
I don't understand why "very large weights fit the training data very
|
Intuitive understanding of regularization
Is overfitting bad when we have really a lot of data?
Overfitting with lot of data is still overfitting and overfitting is bad.
I don't understand why "very large weights fit the training data very
well"?
I found an example in Deep Learning by Goodfellow(page 293):
Suppose we apply logistic regression to a problem where the classes are linearly separable. If a set of weights $w$ make the model fit the data very well, then it's clear that $2w$ would provide us higher likelihood. And in theory after many iterations of opotimization this increasing would never hald.
Is regularization always needed?
This question seems equal to: does overfitting always occur? If there exists data the model has never seen during training, the overfitting may occur and hence regularization is necessary. It seems that hardly can we train the model with all possible cases, then normally regularization is always necessary.
|
Intuitive understanding of regularization
Is overfitting bad when we have really a lot of data?
Overfitting with lot of data is still overfitting and overfitting is bad.
I don't understand why "very large weights fit the training data very
|
44,149
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
|
Model accuracy can be defined as the difference between the model prediction and truth expressed in terms of squared error. So model accuracy is $E([T_{model}-T]^2)$ However you don't know the true $T$. But you say the you have $T_{obs}$ and you know its error distribution. So based on your assumption $E([T_{obs}-T]^2)=\sigma^2$.
Now $E([T_{model}-T_{obs}]^2)$ is unknown but can be estimated from the average squared difference between the model prediction and the observed value. What you are interested in is $E([T_{model}-T]^2)$. Add and subtract $T_{obs}$ inside the brackets and expand.
After a few algebra steps you get $$E([T_{model}-T]^2)= E([T_{model}-T_{obs}]^2)+ E([T_{obs}-T]^2 + E([T_{model}-T_{obs}] [T_{obs}-T]).$$
Note that the term $E([T_{obs}-T]^2) = \sigma^2$ and, since $T_{model}$ is independent of $T_{obs}$, $$E([T_{model}-T_{obs}] [T_{obs}-T])= E(T_{model} -T_{obs}) E(T_{obs}-T)$$ and by assuming the error in $T_{obs}$ is $\rm N(0,\sigma^2)$, $E(T_{obs}-T)=0.$
So we have the variance decomposition $E([T_{model}-T]^2)= E([T_{model}-T_{obs}]^2)+\sigma^2$.
So we can estimate the model error by taking the estimate for $E([T_{model}-T_{obs}]^2)$ and adding it to the known $\sigma^2$ .
However if you want to assess the uncertainty in the estimate of $E([T_{model}-T_{obs}]^2)$ you still need to get a sampling distribution for it under the null hypothesis which amounts to still doing what I recommended in my answer to Steven's previous question.
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
|
Model accuracy can be defined as the difference between the model prediction and truth expressed in terms of squared error. So model accuracy is $E([T_{model}-T]^2)$ However you don't know the true $
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
Model accuracy can be defined as the difference between the model prediction and truth expressed in terms of squared error. So model accuracy is $E([T_{model}-T]^2)$ However you don't know the true $T$. But you say the you have $T_{obs}$ and you know its error distribution. So based on your assumption $E([T_{obs}-T]^2)=\sigma^2$.
Now $E([T_{model}-T_{obs}]^2)$ is unknown but can be estimated from the average squared difference between the model prediction and the observed value. What you are interested in is $E([T_{model}-T]^2)$. Add and subtract $T_{obs}$ inside the brackets and expand.
After a few algebra steps you get $$E([T_{model}-T]^2)= E([T_{model}-T_{obs}]^2)+ E([T_{obs}-T]^2 + E([T_{model}-T_{obs}] [T_{obs}-T]).$$
Note that the term $E([T_{obs}-T]^2) = \sigma^2$ and, since $T_{model}$ is independent of $T_{obs}$, $$E([T_{model}-T_{obs}] [T_{obs}-T])= E(T_{model} -T_{obs}) E(T_{obs}-T)$$ and by assuming the error in $T_{obs}$ is $\rm N(0,\sigma^2)$, $E(T_{obs}-T)=0.$
So we have the variance decomposition $E([T_{model}-T]^2)= E([T_{model}-T_{obs}]^2)+\sigma^2$.
So we can estimate the model error by taking the estimate for $E([T_{model}-T_{obs}]^2)$ and adding it to the known $\sigma^2$ .
However if you want to assess the uncertainty in the estimate of $E([T_{model}-T_{obs}]^2)$ you still need to get a sampling distribution for it under the null hypothesis which amounts to still doing what I recommended in my answer to Steven's previous question.
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
Model accuracy can be defined as the difference between the model prediction and truth expressed in terms of squared error. So model accuracy is $E([T_{model}-T]^2)$ However you don't know the true $
|
44,150
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
|
Given that there is $N(0, \sigma)$ error in your observation, then the likelihood of the observation $T_{obs}$ given measurements $x$ is $L(T_{obs}|x) = N(T_{obs}; g(x), \sigma)$. One would need multiple measurements and temperature observations to have an estimateof $\sigma$, e.g. maximum likelihood. This is the gist of regression as noted in another answer.
An alternative to the parametric form for the error surrounding model estimates is called semi-parametric regression. For example, one could fit the model to measurements and then bootstrap the residuals. Another, more sophisticated approach involves Gaussian processes. Generally, semi-parametric regression is useful when assumptions such as homoscedastic errors are unrealistic. For example, the model $g(x)$ might be more consistent in predicting small temperatures and noisier in estimating large values.
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
|
Given that there is $N(0, \sigma)$ error in your observation, then the likelihood of the observation $T_{obs}$ given measurements $x$ is $L(T_{obs}|x) = N(T_{obs}; g(x), \sigma)$. One would need mult
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
Given that there is $N(0, \sigma)$ error in your observation, then the likelihood of the observation $T_{obs}$ given measurements $x$ is $L(T_{obs}|x) = N(T_{obs}; g(x), \sigma)$. One would need multiple measurements and temperature observations to have an estimateof $\sigma$, e.g. maximum likelihood. This is the gist of regression as noted in another answer.
An alternative to the parametric form for the error surrounding model estimates is called semi-parametric regression. For example, one could fit the model to measurements and then bootstrap the residuals. Another, more sophisticated approach involves Gaussian processes. Generally, semi-parametric regression is useful when assumptions such as homoscedastic errors are unrealistic. For example, the model $g(x)$ might be more consistent in predicting small temperatures and noisier in estimating large values.
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
Given that there is $N(0, \sigma)$ error in your observation, then the likelihood of the observation $T_{obs}$ given measurements $x$ is $L(T_{obs}|x) = N(T_{obs}; g(x), \sigma)$. One would need mult
|
44,151
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
|
The state of the art in Meteorological forecasting is Ensemble Forecasting. This has only become possible in the last few years because of advances in computing power and the corresponding reduction of the cost of computing.
Ensemble forecasting tries to address the problem of how to get realistic probabilities from deterministic models. The basic idea is that the state a model is initialised with (all the pressures, temperatures, densities etc at every grid cell) is not known with certainty. We might know the state very well at locations where we can measure it, but the whole vertical profile in principle needs to be known everywhere. Modern deterministic models can actually do very well if all of this is known, but it is impossible to do in practice as we only have measurements in certain places at certain times.
With this in mind, the initial conditions are randomly perturbed based on the best understanding of the probable distribution of initial conditions, based on available measurements. For each randomly perturbed set of initial conditions the deterministic model is run to observe the likely range of final states given the uncertainty in the initial states.
In practice there is a fine art to this because the models aren't perfect, so if the above process is followed the final distribution is too restrictive compared with reality and extra model uncertainty is injected using a variety of different approaches. In general it takes a fair bit of tweaking to accurately calibrate an ensemble forecasting system with respect to historical data. This is a whole field of study in itself which encompasses physics, numerical methods and statistics.
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
|
The state of the art in Meteorological forecasting is Ensemble Forecasting. This has only become possible in the last few years because of advances in computing power and the corresponding reduction o
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
The state of the art in Meteorological forecasting is Ensemble Forecasting. This has only become possible in the last few years because of advances in computing power and the corresponding reduction of the cost of computing.
Ensemble forecasting tries to address the problem of how to get realistic probabilities from deterministic models. The basic idea is that the state a model is initialised with (all the pressures, temperatures, densities etc at every grid cell) is not known with certainty. We might know the state very well at locations where we can measure it, but the whole vertical profile in principle needs to be known everywhere. Modern deterministic models can actually do very well if all of this is known, but it is impossible to do in practice as we only have measurements in certain places at certain times.
With this in mind, the initial conditions are randomly perturbed based on the best understanding of the probable distribution of initial conditions, based on available measurements. For each randomly perturbed set of initial conditions the deterministic model is run to observe the likely range of final states given the uncertainty in the initial states.
In practice there is a fine art to this because the models aren't perfect, so if the above process is followed the final distribution is too restrictive compared with reality and extra model uncertainty is injected using a variety of different approaches. In general it takes a fair bit of tweaking to accurately calibrate an ensemble forecasting system with respect to historical data. This is a whole field of study in itself which encompasses physics, numerical methods and statistics.
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
The state of the art in Meteorological forecasting is Ensemble Forecasting. This has only become possible in the last few years because of advances in computing power and the corresponding reduction o
|
44,152
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
|
Typically, statistical models (i.e., models of data) have a random component (also sometimes called a 'stochastic component'). For example, a model might be:
$$
Y=X\beta+\epsilon \\
\text{where }\epsilon\sim\mathcal{N}(0,\sigma^2)
$$
This example is a basic regression model. The $X\beta$ is called the structural component, and the $\epsilon$ is the random component. However, models are often written and discussed in terms of the predicted value or the expected value. The same model could be put:
$$
\hat{Y}=X\beta \\
\text{or} \\
E(Y)=X\beta
$$
The random component still exists, but is implicit.
You have described a deterministic model. (Note that I don't know much about meteorology or weather forecasting, so I can't say anything about how normal or appropriate that might be in the field.) At any rate, the model makes a simple prediction--it should be fairly simple to assess: the prediction either matches the observation, or it doesn't.
One oddity is that there seems to be a separate model of the intrinsic measurement error of the observations. I would think that the measurement of temperature has advanced to the point where this is inconsequential, but you could certainly assess the performance of the model over repeated observations, and see if the predictions fall within a $(1-\alpha)\% CI$, $(1-\alpha)\%$ of the time. My first guess is that the error in weather prediction will swamp the measurement error in the observation of temperature, and so I would have expected that people would not spend time on a deterministic model, but would include a random component directly into the primary model, which would mean they could be evaluated just like any normal statistical model would.
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
|
Typically, statistical models (i.e., models of data) have a random component (also sometimes called a 'stochastic component'). For example, a model might be:
$$
Y=X\beta+\epsilon \\
\text{where }\ep
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
Typically, statistical models (i.e., models of data) have a random component (also sometimes called a 'stochastic component'). For example, a model might be:
$$
Y=X\beta+\epsilon \\
\text{where }\epsilon\sim\mathcal{N}(0,\sigma^2)
$$
This example is a basic regression model. The $X\beta$ is called the structural component, and the $\epsilon$ is the random component. However, models are often written and discussed in terms of the predicted value or the expected value. The same model could be put:
$$
\hat{Y}=X\beta \\
\text{or} \\
E(Y)=X\beta
$$
The random component still exists, but is implicit.
You have described a deterministic model. (Note that I don't know much about meteorology or weather forecasting, so I can't say anything about how normal or appropriate that might be in the field.) At any rate, the model makes a simple prediction--it should be fairly simple to assess: the prediction either matches the observation, or it doesn't.
One oddity is that there seems to be a separate model of the intrinsic measurement error of the observations. I would think that the measurement of temperature has advanced to the point where this is inconsequential, but you could certainly assess the performance of the model over repeated observations, and see if the predictions fall within a $(1-\alpha)\% CI$, $(1-\alpha)\%$ of the time. My first guess is that the error in weather prediction will swamp the measurement error in the observation of temperature, and so I would have expected that people would not spend time on a deterministic model, but would include a random component directly into the primary model, which would mean they could be evaluated just like any normal statistical model would.
|
Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
Typically, statistical models (i.e., models of data) have a random component (also sometimes called a 'stochastic component'). For example, a model might be:
$$
Y=X\beta+\epsilon \\
\text{where }\ep
|
44,153
|
Interpreting main effect and interaction
|
In general, you should not base your model selection solely on statistical significance. Substantive meaning is more important.
In this particular case, you can graph the predicted values for males and females, with the x-axis being income and the y-axis the number of items bought, and a line for each gender.
@gung makes a good point that, if the y-variable is a count, you should use an appropriate model, such as Poisson regression, or, more likely, negative binomial regression, since over-dispersion is very common in count regression.
|
Interpreting main effect and interaction
|
In general, you should not base your model selection solely on statistical significance. Substantive meaning is more important.
In this particular case, you can graph the predicted values for males an
|
Interpreting main effect and interaction
In general, you should not base your model selection solely on statistical significance. Substantive meaning is more important.
In this particular case, you can graph the predicted values for males and females, with the x-axis being income and the y-axis the number of items bought, and a line for each gender.
@gung makes a good point that, if the y-variable is a count, you should use an appropriate model, such as Poisson regression, or, more likely, negative binomial regression, since over-dispersion is very common in count regression.
|
Interpreting main effect and interaction
In general, you should not base your model selection solely on statistical significance. Substantive meaning is more important.
In this particular case, you can graph the predicted values for males an
|
44,154
|
Interpreting main effect and interaction
|
Your results suggest that there is no interaction--you simply have a main effect of X1. You could say something like, "The number of tubs of ice-cream people buy is related to their income. For instance, if person A's income is one unit higher than person B's income, person A typically buys $\beta_1$ more tubs of ice-cream than person B. Our data suggest that this relationship between income and ice-cream buying is similar for both men and women."
(Incidentally, if your response variable is a count, you should use Poisson regression rather than the general linear model, but I don't know if that's just your example.)
|
Interpreting main effect and interaction
|
Your results suggest that there is no interaction--you simply have a main effect of X1. You could say something like, "The number of tubs of ice-cream people buy is related to their income. For inst
|
Interpreting main effect and interaction
Your results suggest that there is no interaction--you simply have a main effect of X1. You could say something like, "The number of tubs of ice-cream people buy is related to their income. For instance, if person A's income is one unit higher than person B's income, person A typically buys $\beta_1$ more tubs of ice-cream than person B. Our data suggest that this relationship between income and ice-cream buying is similar for both men and women."
(Incidentally, if your response variable is a count, you should use Poisson regression rather than the general linear model, but I don't know if that's just your example.)
|
Interpreting main effect and interaction
Your results suggest that there is no interaction--you simply have a main effect of X1. You could say something like, "The number of tubs of ice-cream people buy is related to their income. For inst
|
44,155
|
What does "20/ln(2)" mean in logistic regression?
|
This is a common scaling factor used for credit scoring models built with logistic regression.
The interpretation of the dependent variable in logistic regression is as log odds, but in credit scoring, we like to deal in points, thus a scaling factor is applied to the log odds to convert to the point system.
A widely used convention in credit scoring is the concept of "Points to Double the Odds" (often abbreviated PDO), and this is the source of the $\ln (2)$ in the question. For example, how many points does the score change if the odds increase from 100:1 to 200:1.
A common default value for PDO is 20, because it produces credit score ranges that people tend to like.
So, the interpretation of the $20/\ln(2)$ is that for a 20-point increase in score, the odds double.
|
What does "20/ln(2)" mean in logistic regression?
|
This is a common scaling factor used for credit scoring models built with logistic regression.
The interpretation of the dependent variable in logistic regression is as log odds, but in credit scoring
|
What does "20/ln(2)" mean in logistic regression?
This is a common scaling factor used for credit scoring models built with logistic regression.
The interpretation of the dependent variable in logistic regression is as log odds, but in credit scoring, we like to deal in points, thus a scaling factor is applied to the log odds to convert to the point system.
A widely used convention in credit scoring is the concept of "Points to Double the Odds" (often abbreviated PDO), and this is the source of the $\ln (2)$ in the question. For example, how many points does the score change if the odds increase from 100:1 to 200:1.
A common default value for PDO is 20, because it produces credit score ranges that people tend to like.
So, the interpretation of the $20/\ln(2)$ is that for a 20-point increase in score, the odds double.
|
What does "20/ln(2)" mean in logistic regression?
This is a common scaling factor used for credit scoring models built with logistic regression.
The interpretation of the dependent variable in logistic regression is as log odds, but in credit scoring
|
44,156
|
What does "20/ln(2)" mean in logistic regression?
|
Typically in credit scoring one would choose a baseline score e.g. 600. We assign a certain meaning to 600 for example, 600 means the good bad odd is 30:1 (where bad typically means a default, the default definition is typically 90 days past payment due on the loan, however the bad definition can vary). Typically they also define that a 20 point jump means doubling of odds, for example 620 means the good bad odd if 60:1 and 640 is 120:1 etc. This definition comes from logistic regression.
If we fit a logistic regression model the model being fitted is this
$$\log(p/(1-p)) = a + b_1d_1 + \cdots + b_n d_n$$
where $a$ and $b_i$ are parameter estimates and $p$ is the probability of good $d_i$ are your raw data (explanatory variables). The LHS is the log good bad odds. To conform to the above mentioned standard (i.e. 600 is 30:1 and 620 is 60:1) we scale the RHS using $c$ and $d$ found by solving these simultaneous equations
$$600 = c\log(30/1) + d = c + d(a + b_1d_1 + \cdots + b_n d_n)$$
$$620 = c\log(60/1) + d = c + d(a + b_1d_1 + \cdots + b_n d_n)$$
you will get $c = 20/\log(2)$. Hence scaling the RHS by $c$ and $d$ will give you the scores you want. Hence we see that $20/\log(2)$ is just to achieve the 20 points to double odds mantra.
|
What does "20/ln(2)" mean in logistic regression?
|
Typically in credit scoring one would choose a baseline score e.g. 600. We assign a certain meaning to 600 for example, 600 means the good bad odd is 30:1 (where bad typically means a default, the def
|
What does "20/ln(2)" mean in logistic regression?
Typically in credit scoring one would choose a baseline score e.g. 600. We assign a certain meaning to 600 for example, 600 means the good bad odd is 30:1 (where bad typically means a default, the default definition is typically 90 days past payment due on the loan, however the bad definition can vary). Typically they also define that a 20 point jump means doubling of odds, for example 620 means the good bad odd if 60:1 and 640 is 120:1 etc. This definition comes from logistic regression.
If we fit a logistic regression model the model being fitted is this
$$\log(p/(1-p)) = a + b_1d_1 + \cdots + b_n d_n$$
where $a$ and $b_i$ are parameter estimates and $p$ is the probability of good $d_i$ are your raw data (explanatory variables). The LHS is the log good bad odds. To conform to the above mentioned standard (i.e. 600 is 30:1 and 620 is 60:1) we scale the RHS using $c$ and $d$ found by solving these simultaneous equations
$$600 = c\log(30/1) + d = c + d(a + b_1d_1 + \cdots + b_n d_n)$$
$$620 = c\log(60/1) + d = c + d(a + b_1d_1 + \cdots + b_n d_n)$$
you will get $c = 20/\log(2)$. Hence scaling the RHS by $c$ and $d$ will give you the scores you want. Hence we see that $20/\log(2)$ is just to achieve the 20 points to double odds mantra.
|
What does "20/ln(2)" mean in logistic regression?
Typically in credit scoring one would choose a baseline score e.g. 600. We assign a certain meaning to 600 for example, 600 means the good bad odd is 30:1 (where bad typically means a default, the def
|
44,157
|
What does "20/ln(2)" mean in logistic regression?
|
As for me all this theory wasn't that obvious I provide code with formulas to explain how all the "definitions" are translated into the resulting score.
import pandas as pd
import numpy as np
df=pd.DataFrame()
df['fc']=[206, 205, 200, 220, 230, 235, 236, 240,250]
df['cat']=[0, 1, 0, 0, 0, 1, 1, 1,0]
df['good']=[0, 1, 0, 0, 1, 0, 1, 1,1]
train=df[['fc','cat']]
y=df['good']
from sklearn.linear_model import LogisticRegression
clf=LogisticRegression(fit_intercept=True, solver='lbfgs')
clf=clf.fit(train, y)
coefficients = np.append (clf.intercept_, clf.coef_)
print('Coefficients', coefficients)
#Option 1: Predict proba
test=pd.DataFrame (np.array([200,1]).reshape(1,2))
y_pred=clf.predict_proba(test)[:,1]
print('Predict proba: ' ,y_pred)
#Option 2: Calculate Probability
ln_odds=sum(np.multiply(coefficients,np.array([1,200,1]))) # sum(coefficients*values)=ln(odds)
odds=np.exp(ln_odds)
prob_good=odds/(1+odds)
print('Resulting probablity: ', prob_good)
#score from Siddiqi
pdo=20
factor=pdo/np.log(2)
offset=200
score1=offset+factor*np.log(1) #p_bad=0.5, bad=good > odds=1
score2=offset+factor*np.log(2) #p_bad=0,3(3) good=2 bad=1
score3=offset+factor*np.log(4) #p_bad=0,2 good=4 bad=1
print(f'Difference 2 and 1: {score2-score1} \nDifference 3 and 2: {score3-score2}' )
'''To calculate score from logregression '''
#NB! in regression target 1 should be set to good as in Siddiqi odds are 100:1 meaning 100 good and 1 bad
score=offset-factor*sum(np.multiply(coefficients,np.array([1,200,1])))
print(f'Score from regression: {round(score,0)}')
#score from probability
score=offset+factor*np.log(prob_good/(1-prob_good))
print(f'Score from probability: {round(score,0)}')
|
What does "20/ln(2)" mean in logistic regression?
|
As for me all this theory wasn't that obvious I provide code with formulas to explain how all the "definitions" are translated into the resulting score.
import pandas as pd
import numpy as np
df=pd.D
|
What does "20/ln(2)" mean in logistic regression?
As for me all this theory wasn't that obvious I provide code with formulas to explain how all the "definitions" are translated into the resulting score.
import pandas as pd
import numpy as np
df=pd.DataFrame()
df['fc']=[206, 205, 200, 220, 230, 235, 236, 240,250]
df['cat']=[0, 1, 0, 0, 0, 1, 1, 1,0]
df['good']=[0, 1, 0, 0, 1, 0, 1, 1,1]
train=df[['fc','cat']]
y=df['good']
from sklearn.linear_model import LogisticRegression
clf=LogisticRegression(fit_intercept=True, solver='lbfgs')
clf=clf.fit(train, y)
coefficients = np.append (clf.intercept_, clf.coef_)
print('Coefficients', coefficients)
#Option 1: Predict proba
test=pd.DataFrame (np.array([200,1]).reshape(1,2))
y_pred=clf.predict_proba(test)[:,1]
print('Predict proba: ' ,y_pred)
#Option 2: Calculate Probability
ln_odds=sum(np.multiply(coefficients,np.array([1,200,1]))) # sum(coefficients*values)=ln(odds)
odds=np.exp(ln_odds)
prob_good=odds/(1+odds)
print('Resulting probablity: ', prob_good)
#score from Siddiqi
pdo=20
factor=pdo/np.log(2)
offset=200
score1=offset+factor*np.log(1) #p_bad=0.5, bad=good > odds=1
score2=offset+factor*np.log(2) #p_bad=0,3(3) good=2 bad=1
score3=offset+factor*np.log(4) #p_bad=0,2 good=4 bad=1
print(f'Difference 2 and 1: {score2-score1} \nDifference 3 and 2: {score3-score2}' )
'''To calculate score from logregression '''
#NB! in regression target 1 should be set to good as in Siddiqi odds are 100:1 meaning 100 good and 1 bad
score=offset-factor*sum(np.multiply(coefficients,np.array([1,200,1])))
print(f'Score from regression: {round(score,0)}')
#score from probability
score=offset+factor*np.log(prob_good/(1-prob_good))
print(f'Score from probability: {round(score,0)}')
|
What does "20/ln(2)" mean in logistic regression?
As for me all this theory wasn't that obvious I provide code with formulas to explain how all the "definitions" are translated into the resulting score.
import pandas as pd
import numpy as np
df=pd.D
|
44,158
|
Small sample linear regression: Where to start
|
I'd probably take a look at a ridge regression or, better, the lasso. These techniques are often used when there is multicollinearity. There are several options for doing this in R: See the Regularized and Shrinkage Methods section of the Machine Learning & Statistical Learning Task View on CRAN.
You don't have enough data to start thinking about some of the techniques listed in other sections of that Task View.
|
Small sample linear regression: Where to start
|
I'd probably take a look at a ridge regression or, better, the lasso. These techniques are often used when there is multicollinearity. There are several options for doing this in R: See the Regularize
|
Small sample linear regression: Where to start
I'd probably take a look at a ridge regression or, better, the lasso. These techniques are often used when there is multicollinearity. There are several options for doing this in R: See the Regularized and Shrinkage Methods section of the Machine Learning & Statistical Learning Task View on CRAN.
You don't have enough data to start thinking about some of the techniques listed in other sections of that Task View.
|
Small sample linear regression: Where to start
I'd probably take a look at a ridge regression or, better, the lasso. These techniques are often used when there is multicollinearity. There are several options for doing this in R: See the Regularize
|
44,159
|
Small sample linear regression: Where to start
|
It seems to me that the only thing worth doing here is testing a very focussed hypothesis, if you have one. But it seems like you don't.
With so few cases and so many variables, anything else would (in my opinion) be a fishing expedition. That could be a bit useful, perhaps, to generate an hypothesis to test with new data. But any results from a multivariate unfocussed analysis of these data is likely to be a false positive coincidental finding that probably won't hold up with new data.
|
Small sample linear regression: Where to start
|
It seems to me that the only thing worth doing here is testing a very focussed hypothesis, if you have one. But it seems like you don't.
With so few cases and so many variables, anything else would (
|
Small sample linear regression: Where to start
It seems to me that the only thing worth doing here is testing a very focussed hypothesis, if you have one. But it seems like you don't.
With so few cases and so many variables, anything else would (in my opinion) be a fishing expedition. That could be a bit useful, perhaps, to generate an hypothesis to test with new data. But any results from a multivariate unfocussed analysis of these data is likely to be a false positive coincidental finding that probably won't hold up with new data.
|
Small sample linear regression: Where to start
It seems to me that the only thing worth doing here is testing a very focussed hypothesis, if you have one. But it seems like you don't.
With so few cases and so many variables, anything else would (
|
44,160
|
Small sample linear regression: Where to start
|
I find @ucfagls's idea most appropriate here, since you have very few observations and a lot of variables. Ridge regression should do its job for prediction purpose.
Another way to analyse the data would be to rely on PLS regression (in this case, PLS1), which bears some idea with regression on PCA scores but seems more interesting in your case. As multicollinearity might be an issue there, you can look at sparse solution (see e.g., the spls or the mixOmics R packages).
|
Small sample linear regression: Where to start
|
I find @ucfagls's idea most appropriate here, since you have very few observations and a lot of variables. Ridge regression should do its job for prediction purpose.
Another way to analyse the data wo
|
Small sample linear regression: Where to start
I find @ucfagls's idea most appropriate here, since you have very few observations and a lot of variables. Ridge regression should do its job for prediction purpose.
Another way to analyse the data would be to rely on PLS regression (in this case, PLS1), which bears some idea with regression on PCA scores but seems more interesting in your case. As multicollinearity might be an issue there, you can look at sparse solution (see e.g., the spls or the mixOmics R packages).
|
Small sample linear regression: Where to start
I find @ucfagls's idea most appropriate here, since you have very few observations and a lot of variables. Ridge regression should do its job for prediction purpose.
Another way to analyse the data wo
|
44,161
|
Small sample linear regression: Where to start
|
If you're frustrated with too many correlations, and since you already have your covariance matrix (well almost) you could do a principal components analysis. You'll end up with fewer dimensions, which is probably fine considering your data set size, and what you end up with won't be intercorrelated anymore.
|
Small sample linear regression: Where to start
|
If you're frustrated with too many correlations, and since you already have your covariance matrix (well almost) you could do a principal components analysis. You'll end up with fewer dimensions, whic
|
Small sample linear regression: Where to start
If you're frustrated with too many correlations, and since you already have your covariance matrix (well almost) you could do a principal components analysis. You'll end up with fewer dimensions, which is probably fine considering your data set size, and what you end up with won't be intercorrelated anymore.
|
Small sample linear regression: Where to start
If you're frustrated with too many correlations, and since you already have your covariance matrix (well almost) you could do a principal components analysis. You'll end up with fewer dimensions, whic
|
44,162
|
Does higher variance usually mean lower probability density?
|
Up to a point. Because the density integrates to 1, the typical value of the density will be higher if the distribution has a lower variance and lower if it has a higher variance. For example, the maximum density of a Normal distribution with variance $\sigma^2$ is $1/(\sigma\sqrt{2\pi}$, which gets lower as $\sigma$ gets higher. On the other hand, because the distribution with larger variance is more spread out, it has higher density out in the tails. For example, here are two Normal distributions, with variance 1 and 4.
The distribution with variance 1 has higher density in the middle, but lower density at the edges.
However, if you have two distributions with different shapes, it's harder to make generalisations.
|
Does higher variance usually mean lower probability density?
|
Up to a point. Because the density integrates to 1, the typical value of the density will be higher if the distribution has a lower variance and lower if it has a higher variance. For example, the ma
|
Does higher variance usually mean lower probability density?
Up to a point. Because the density integrates to 1, the typical value of the density will be higher if the distribution has a lower variance and lower if it has a higher variance. For example, the maximum density of a Normal distribution with variance $\sigma^2$ is $1/(\sigma\sqrt{2\pi}$, which gets lower as $\sigma$ gets higher. On the other hand, because the distribution with larger variance is more spread out, it has higher density out in the tails. For example, here are two Normal distributions, with variance 1 and 4.
The distribution with variance 1 has higher density in the middle, but lower density at the edges.
However, if you have two distributions with different shapes, it's harder to make generalisations.
|
Does higher variance usually mean lower probability density?
Up to a point. Because the density integrates to 1, the typical value of the density will be higher if the distribution has a lower variance and lower if it has a higher variance. For example, the ma
|
44,163
|
Does higher variance usually mean lower probability density?
|
Standard deviation and probability density are exactly inversely correlated in one common and important case: scaled distributions, i.e. the distributions of $a\cdot X$ for different $a$ and same random variable $X$. Thomas' answer has a great example of this.
In the more general case, there does not have to be any relation. For example, take the mixture of two Gaussians at $+a$ and $-a$, with stdev $\sigma$ each. As long as $a \gg \sigma$ (so the Gaussians don't overlap), the maximum density is $1/(2\sqrt{2\pi}\, \sigma)$ and thus independent of $a$, while the variance $a^2 + \sigma^2$ depends strongly on $a$.
Edit: If we now increase $a$ while decreasing $\sigma$, variance and peak density both increase simultaneously, making it a true counterexample.
|
Does higher variance usually mean lower probability density?
|
Standard deviation and probability density are exactly inversely correlated in one common and important case: scaled distributions, i.e. the distributions of $a\cdot X$ for different $a$ and same rand
|
Does higher variance usually mean lower probability density?
Standard deviation and probability density are exactly inversely correlated in one common and important case: scaled distributions, i.e. the distributions of $a\cdot X$ for different $a$ and same random variable $X$. Thomas' answer has a great example of this.
In the more general case, there does not have to be any relation. For example, take the mixture of two Gaussians at $+a$ and $-a$, with stdev $\sigma$ each. As long as $a \gg \sigma$ (so the Gaussians don't overlap), the maximum density is $1/(2\sqrt{2\pi}\, \sigma)$ and thus independent of $a$, while the variance $a^2 + \sigma^2$ depends strongly on $a$.
Edit: If we now increase $a$ while decreasing $\sigma$, variance and peak density both increase simultaneously, making it a true counterexample.
|
Does higher variance usually mean lower probability density?
Standard deviation and probability density are exactly inversely correlated in one common and important case: scaled distributions, i.e. the distributions of $a\cdot X$ for different $a$ and same rand
|
44,164
|
Does higher variance usually mean lower probability density?
|
Response to updated question: If I sample 100 data points from two distributions of the same type, but one with a lower variance and one with a higher variance, would the former one have higher likelihood?
The likelihood $L(\theta|X)$ depends on both the parameter $\theta$ and the random sample $X$, so it's hard to know what is meant by a "higher likelihood." If
you have one sample $X$ taken from one distribution $P$, and
all parameters of $P$ are known save the variance $\sigma^2$, and
you know that $\sigma^2\in\{\sigma^2_1,\sigma^2_2\}$, where $\sigma^2_1<\sigma^2_2$,
then the likelihood ratio $\frac{L(\sigma^2_1|X)}{L(\sigma^2_2|X)}\equiv \prod_{i=1}^{100} \frac{ p(x_i|\sigma^2_1)}{ p(x_i|\sigma^2_2)}$ reflects the evidence in favor of either of the two possible values of $\sigma^2$. A very high dispersion of $X$ will result in $L(\sigma^2_1|X)<L(\sigma^2_2|X)$, but this is a higher likelihood only in the sense that $\sigma_2^2$ is more likely than $\sigma_1^2$ to be the true value of $\sigma^2$. If the dispersion of $X$ is very low, of course, you will have $L(\sigma^2_1|X)>L(\sigma^2_2|X)$.
|
Does higher variance usually mean lower probability density?
|
Response to updated question: If I sample 100 data points from two distributions of the same type, but one with a lower variance and one with a higher variance, would the former one have higher likeli
|
Does higher variance usually mean lower probability density?
Response to updated question: If I sample 100 data points from two distributions of the same type, but one with a lower variance and one with a higher variance, would the former one have higher likelihood?
The likelihood $L(\theta|X)$ depends on both the parameter $\theta$ and the random sample $X$, so it's hard to know what is meant by a "higher likelihood." If
you have one sample $X$ taken from one distribution $P$, and
all parameters of $P$ are known save the variance $\sigma^2$, and
you know that $\sigma^2\in\{\sigma^2_1,\sigma^2_2\}$, where $\sigma^2_1<\sigma^2_2$,
then the likelihood ratio $\frac{L(\sigma^2_1|X)}{L(\sigma^2_2|X)}\equiv \prod_{i=1}^{100} \frac{ p(x_i|\sigma^2_1)}{ p(x_i|\sigma^2_2)}$ reflects the evidence in favor of either of the two possible values of $\sigma^2$. A very high dispersion of $X$ will result in $L(\sigma^2_1|X)<L(\sigma^2_2|X)$, but this is a higher likelihood only in the sense that $\sigma_2^2$ is more likely than $\sigma_1^2$ to be the true value of $\sigma^2$. If the dispersion of $X$ is very low, of course, you will have $L(\sigma^2_1|X)>L(\sigma^2_2|X)$.
|
Does higher variance usually mean lower probability density?
Response to updated question: If I sample 100 data points from two distributions of the same type, but one with a lower variance and one with a higher variance, would the former one have higher likeli
|
44,165
|
Does higher variance usually mean lower probability density?
|
"Probability density" is not a single number that can be lower or higher. It's a function $p(x)$ of $x$. Usually it also depends on some additional parameters $q_i$. For example, in the normal distribution, we have $q_1 = \mu$ and $q_2 = \sigma$.
To change variance, you would change these parameters. You could also change the function itself, if you hadn't said "same distribution". When you change those parameters, you will also alter $p(x)$, which may become higher or lower depending on the exact function. So no, it is not true that higher variance will always reduce probability density. For example, for a normal distribution, extreme values will have higher density.
To give a specific counter example, let's talk about the probability of getting $2 < x < 3$:
With a normal distribution $N(0, 1)$, $p(x > 2) = 0.023$ and $p(x > 3) = 0.001$ therefore $p(2 < x < 3) = 0.022$.
With a normal distribution $N(0, 4)$, $p(x > 2) = 0.158$ and $p(x > 3) = 0.067$ therefore $p(2 < x < 3) = 0.091$
In this case, the probability density between 2 and 3 has increased when variance went from 1 to 4.
I had to change your example a little, because this is trivial:
is it true that $P(\mathbf{X_1}; N(\mu_1, \sigma_1^2)) < P(\mathbf{X_2}; N(\mu_2, \sigma_2^2))$?
You've left too many independent variables, so I can say $X_1 = \mu_1$ and $X_2 = \mu_2 + 10 \sigma_2$ and then it is false, or vice versa and it is true.
|
Does higher variance usually mean lower probability density?
|
"Probability density" is not a single number that can be lower or higher. It's a function $p(x)$ of $x$. Usually it also depends on some additional parameters $q_i$. For example, in the normal distrib
|
Does higher variance usually mean lower probability density?
"Probability density" is not a single number that can be lower or higher. It's a function $p(x)$ of $x$. Usually it also depends on some additional parameters $q_i$. For example, in the normal distribution, we have $q_1 = \mu$ and $q_2 = \sigma$.
To change variance, you would change these parameters. You could also change the function itself, if you hadn't said "same distribution". When you change those parameters, you will also alter $p(x)$, which may become higher or lower depending on the exact function. So no, it is not true that higher variance will always reduce probability density. For example, for a normal distribution, extreme values will have higher density.
To give a specific counter example, let's talk about the probability of getting $2 < x < 3$:
With a normal distribution $N(0, 1)$, $p(x > 2) = 0.023$ and $p(x > 3) = 0.001$ therefore $p(2 < x < 3) = 0.022$.
With a normal distribution $N(0, 4)$, $p(x > 2) = 0.158$ and $p(x > 3) = 0.067$ therefore $p(2 < x < 3) = 0.091$
In this case, the probability density between 2 and 3 has increased when variance went from 1 to 4.
I had to change your example a little, because this is trivial:
is it true that $P(\mathbf{X_1}; N(\mu_1, \sigma_1^2)) < P(\mathbf{X_2}; N(\mu_2, \sigma_2^2))$?
You've left too many independent variables, so I can say $X_1 = \mu_1$ and $X_2 = \mu_2 + 10 \sigma_2$ and then it is false, or vice versa and it is true.
|
Does higher variance usually mean lower probability density?
"Probability density" is not a single number that can be lower or higher. It's a function $p(x)$ of $x$. Usually it also depends on some additional parameters $q_i$. For example, in the normal distrib
|
44,166
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
|
Variable selection is not compulsory. The idea that you have to throw away variables is wrong. Actually, unless there are strong reasons to throw away variables, don't do it! Use the full model and use the p-values to guide interpretation rather than throwing away information. An insignificant p-value doesn't mean that a variable should be removed, it only means that the data don't give you clear evidence that the coefficient is nonzero.
The Lasso mitigates some of the issues of doing variable selection by p-values, but as long as you don't feel the need to remove information, there's no need to do it, neither by Lasso.
Good reasons for selecting variables are:
The number of observations is critically low for the number of variables you have.
There are strong dependences between your variables ($X^TX$ is close to singular) and certain information is representated by several variables
The model is used to predict future observations and you are happy to get rid of some variables because it may be costly to observe them in the future.
Using cross-validation and the like you find that a model with fewer variables predicts better (if this happens, mostly one of 1 and 2 is also in place, but there are some further situations in which the original set of variables contains a lot of noise).
Variable selection is not required for interpretation! Note that even if there is no evidence (high p-value) that a certain variable has a nonzero coefficient, this variable may still improve the predictive power of the model. Removing the variable will set its coefficient to zero - don't forget that the estimator for it's coefficient in the full model is the "best guess" of the coefficient value that you have, significant or not, and therefore also better than zero in the sense of least squares.
By the way, regarding Lasso vs. variable selection by p-values (backward/forward/stepwise selection - never throw away all variables with large p-values in one go anyway): There are well known issues with variable selection by p-values and Lasso is often better, but not always. I've seen a good number of examples in which a model selected by backward or forward selection did better predicting on independent data than the Lasso. If you have enough observations to properly assess prediction quality using cross-validation and the like, you can compare different approaches and pick the best rather than always using Lasso.
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
|
Variable selection is not compulsory. The idea that you have to throw away variables is wrong. Actually, unless there are strong reasons to throw away variables, don't do it! Use the full model and us
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
Variable selection is not compulsory. The idea that you have to throw away variables is wrong. Actually, unless there are strong reasons to throw away variables, don't do it! Use the full model and use the p-values to guide interpretation rather than throwing away information. An insignificant p-value doesn't mean that a variable should be removed, it only means that the data don't give you clear evidence that the coefficient is nonzero.
The Lasso mitigates some of the issues of doing variable selection by p-values, but as long as you don't feel the need to remove information, there's no need to do it, neither by Lasso.
Good reasons for selecting variables are:
The number of observations is critically low for the number of variables you have.
There are strong dependences between your variables ($X^TX$ is close to singular) and certain information is representated by several variables
The model is used to predict future observations and you are happy to get rid of some variables because it may be costly to observe them in the future.
Using cross-validation and the like you find that a model with fewer variables predicts better (if this happens, mostly one of 1 and 2 is also in place, but there are some further situations in which the original set of variables contains a lot of noise).
Variable selection is not required for interpretation! Note that even if there is no evidence (high p-value) that a certain variable has a nonzero coefficient, this variable may still improve the predictive power of the model. Removing the variable will set its coefficient to zero - don't forget that the estimator for it's coefficient in the full model is the "best guess" of the coefficient value that you have, significant or not, and therefore also better than zero in the sense of least squares.
By the way, regarding Lasso vs. variable selection by p-values (backward/forward/stepwise selection - never throw away all variables with large p-values in one go anyway): There are well known issues with variable selection by p-values and Lasso is often better, but not always. I've seen a good number of examples in which a model selected by backward or forward selection did better predicting on independent data than the Lasso. If you have enough observations to properly assess prediction quality using cross-validation and the like, you can compare different approaches and pick the best rather than always using Lasso.
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
Variable selection is not compulsory. The idea that you have to throw away variables is wrong. Actually, unless there are strong reasons to throw away variables, don't do it! Use the full model and us
|
44,167
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
|
My goal is to understand which variables out of 10 actually explain my dependent variable for a voter/election research.
This is not variable selection!
With variable selection the goal is to reduce the size of a model
Such that it is easier to work with, like less computation intensive or requires less future efforts in sampling.
Such that it is less sensitive to noise. See 'bias variance' tradeoff.
Your goal is to understand a relationship. This can be done by making a plot and observe a straight line (or in the case of more variables the line is a plane and you look at statistics in a table instead of a figure because plotting 10-D is not easy in our 3-D physical world).
The role of the p-values is to filter the effects from the noise (like you would do in the 2-D, plot to tell if there is a straight line relationship, or just noise). The p-values indicate statistical significance. That means that you have observed an effect that is statistically measurable with your experiment. The other effects, with high p-value, might be present but are overshadowed by noise.
So in your case it seems fine to go with p-values. However, be aware of the pitfalls with p-values.
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
|
My goal is to understand which variables out of 10 actually explain my dependent variable for a voter/election research.
This is not variable selection!
With variable selection the goal is to reduce
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
My goal is to understand which variables out of 10 actually explain my dependent variable for a voter/election research.
This is not variable selection!
With variable selection the goal is to reduce the size of a model
Such that it is easier to work with, like less computation intensive or requires less future efforts in sampling.
Such that it is less sensitive to noise. See 'bias variance' tradeoff.
Your goal is to understand a relationship. This can be done by making a plot and observe a straight line (or in the case of more variables the line is a plane and you look at statistics in a table instead of a figure because plotting 10-D is not easy in our 3-D physical world).
The role of the p-values is to filter the effects from the noise (like you would do in the 2-D, plot to tell if there is a straight line relationship, or just noise). The p-values indicate statistical significance. That means that you have observed an effect that is statistically measurable with your experiment. The other effects, with high p-value, might be present but are overshadowed by noise.
So in your case it seems fine to go with p-values. However, be aware of the pitfalls with p-values.
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
My goal is to understand which variables out of 10 actually explain my dependent variable for a voter/election research.
This is not variable selection!
With variable selection the goal is to reduce
|
44,168
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
|
If its only the Feature Importance Aspect you are looking for , you can use RFE Method (Recursive Feature Elimination) as well.
But Lasso Regression takes care of Model not reaching Overfit or Underfit Situations as well as Feature Importance, by reducing all the Useless Features to Zero.
You can use Lasso Regression using GridSearchCV method, and then plot your Mean Train and Mean Test Errors accordingly, if you have different values of Regularization Parameters available. And for the best values of Errors, you can consider the value for the Regularization Parameter and fit your model with Train Data.
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
|
If its only the Feature Importance Aspect you are looking for , you can use RFE Method (Recursive Feature Elimination) as well.
But Lasso Regression takes care of Model not reaching Overfit or Underfi
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
If its only the Feature Importance Aspect you are looking for , you can use RFE Method (Recursive Feature Elimination) as well.
But Lasso Regression takes care of Model not reaching Overfit or Underfit Situations as well as Feature Importance, by reducing all the Useless Features to Zero.
You can use Lasso Regression using GridSearchCV method, and then plot your Mean Train and Mean Test Errors accordingly, if you have different values of Regularization Parameters available. And for the best values of Errors, you can consider the value for the Regularization Parameter and fit your model with Train Data.
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
If its only the Feature Importance Aspect you are looking for , you can use RFE Method (Recursive Feature Elimination) as well.
But Lasso Regression takes care of Model not reaching Overfit or Underfi
|
44,169
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
|
For any others that might be interested in the answer:
I managed to do the LASSO with an excel plugin into excel 365.
It did show significantly different results in comparison to stepwise.
Also, I found out that I actually want to build a causal model not a variable selection. (different terminology).
So yes LASSO seems to be worth the work.
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
|
For any others that might be interested in the answer:
I managed to do the LASSO with an excel plugin into excel 365.
It did show significantly different results in comparison to stepwise.
Also, I fou
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
For any others that might be interested in the answer:
I managed to do the LASSO with an excel plugin into excel 365.
It did show significantly different results in comparison to stepwise.
Also, I found out that I actually want to build a causal model not a variable selection. (different terminology).
So yes LASSO seems to be worth the work.
|
Is it worthwhile to use LASSO for variable selection even if it is cumbersome?
For any others that might be interested in the answer:
I managed to do the LASSO with an excel plugin into excel 365.
It did show significantly different results in comparison to stepwise.
Also, I fou
|
44,170
|
Can we just "pre-test" the backdoor criterion?
|
Indeed, given the DAG, you should only see a correlation between X and Z if there is a direct link between the two, and thus you could test for a correlation directly. These and similar tests are done by all causal discovery algorithms that automatically create the DAG from data, such as the PC algorithm.
However, from the practical perspective of a multiple regression, there are two caveats to this approach:
a) How do you perform the test? A n.s. p-value is not a proof that effect size is zero. Even if the CI is small and overlaps zero, you cannot exclude small confounding effects and thus you have to trade off a small possible bias (from not including a weak confounder) against the improved precision that you gain by having a model with fewer d.f. Plus, there is the issue of post-selection inference, i.e. you need to correct p-values for the tests performed in the model selection.
b) Second, you say that if a variable is not a confounder, it "does not need to be included". I would add to this: for reasons of bias (= causal perspective). However, there are other reasons to include a variable in a multiple regression. Most importantly, if Z is a strong predictor, i.e. if there is a strong link Z->Y, including Z in the regression can reduce the uncertainty on the estimate X->Y, even in the absence of a confounding link Z->X.
|
Can we just "pre-test" the backdoor criterion?
|
Indeed, given the DAG, you should only see a correlation between X and Z if there is a direct link between the two, and thus you could test for a correlation directly. These and similar tests are done
|
Can we just "pre-test" the backdoor criterion?
Indeed, given the DAG, you should only see a correlation between X and Z if there is a direct link between the two, and thus you could test for a correlation directly. These and similar tests are done by all causal discovery algorithms that automatically create the DAG from data, such as the PC algorithm.
However, from the practical perspective of a multiple regression, there are two caveats to this approach:
a) How do you perform the test? A n.s. p-value is not a proof that effect size is zero. Even if the CI is small and overlaps zero, you cannot exclude small confounding effects and thus you have to trade off a small possible bias (from not including a weak confounder) against the improved precision that you gain by having a model with fewer d.f. Plus, there is the issue of post-selection inference, i.e. you need to correct p-values for the tests performed in the model selection.
b) Second, you say that if a variable is not a confounder, it "does not need to be included". I would add to this: for reasons of bias (= causal perspective). However, there are other reasons to include a variable in a multiple regression. Most importantly, if Z is a strong predictor, i.e. if there is a strong link Z->Y, including Z in the regression can reduce the uncertainty on the estimate X->Y, even in the absence of a confounding link Z->X.
|
Can we just "pre-test" the backdoor criterion?
Indeed, given the DAG, you should only see a correlation between X and Z if there is a direct link between the two, and thus you could test for a correlation directly. These and similar tests are done
|
44,171
|
Can we just "pre-test" the backdoor criterion?
|
Yes, in order to confirm a confounding relationship you may perform a regression (or Chi-squared test or other suitable model) of $Z$ on $X$ and $Z$ on $Y$. This is exactly what I'm doing right now for a difference-in-differences healthcare analysis.
There may be confounders or colliders, measured or unmeasured, associated with $Z$ and $X$ or $Z$ and $Y$ that can cloud the true causal relationship between the variables. Therefore, it's important to flesh out the DAG prior to performing regression tests.
|
Can we just "pre-test" the backdoor criterion?
|
Yes, in order to confirm a confounding relationship you may perform a regression (or Chi-squared test or other suitable model) of $Z$ on $X$ and $Z$ on $Y$. This is exactly what I'm doing right now fo
|
Can we just "pre-test" the backdoor criterion?
Yes, in order to confirm a confounding relationship you may perform a regression (or Chi-squared test or other suitable model) of $Z$ on $X$ and $Z$ on $Y$. This is exactly what I'm doing right now for a difference-in-differences healthcare analysis.
There may be confounders or colliders, measured or unmeasured, associated with $Z$ and $X$ or $Z$ and $Y$ that can cloud the true causal relationship between the variables. Therefore, it's important to flesh out the DAG prior to performing regression tests.
|
Can we just "pre-test" the backdoor criterion?
Yes, in order to confirm a confounding relationship you may perform a regression (or Chi-squared test or other suitable model) of $Z$ on $X$ and $Z$ on $Y$. This is exactly what I'm doing right now fo
|
44,172
|
Alternative formula for the Bernoulli pmf?
|
Your alternative form is often written braced form as
$$
f(x)=\begin{cases} p & \text{if $x=1$} \\
1-p & \text{if $x=0$}
\end{cases} $$
and there is nothing wrong with that. It might be useful, for instance, for programming and for elementary exposition.
But if you want to do any form of algebra or calculus, it is inconvenient, so the other form is preferred. But both forms are equally valid, it is only a pragmatic question of what works best for whatever you are doing with it.
|
Alternative formula for the Bernoulli pmf?
|
Your alternative form is often written braced form as
$$
f(x)=\begin{cases} p & \text{if $x=1$} \\
1-p & \text{if $x=0$}
\end{cases} $$
and there is nothing wrong with that. I
|
Alternative formula for the Bernoulli pmf?
Your alternative form is often written braced form as
$$
f(x)=\begin{cases} p & \text{if $x=1$} \\
1-p & \text{if $x=0$}
\end{cases} $$
and there is nothing wrong with that. It might be useful, for instance, for programming and for elementary exposition.
But if you want to do any form of algebra or calculus, it is inconvenient, so the other form is preferred. But both forms are equally valid, it is only a pragmatic question of what works best for whatever you are doing with it.
|
Alternative formula for the Bernoulli pmf?
Your alternative form is often written braced form as
$$
f(x)=\begin{cases} p & \text{if $x=1$} \\
1-p & \text{if $x=0$}
\end{cases} $$
and there is nothing wrong with that. I
|
44,173
|
Alternative formula for the Bernoulli pmf?
|
There's nothing wrong with it as it evaluates the values, it should evaluate. The usual formulation however uses powers so it becomes a case of binomial distribution with $n=1$ sample size. Recall that the probability mass function of binomial distribution is
$$
{n \choose x} \,p^x (1-p)^{n-x}
$$
Where we have $n$ independent Bernoulli distributed with $x$ successes observed, hence $p^x$, and $n-x$ failures. The ${n \choose x}$ corrects for the fact that we want to account for the successes and failures appearing in any possible combination. With $n=1$ and $x \in \{0,1\}$ it reduces to Bernoulli distribution formulated with the powers
$$
\,p^x (1-p)^{1-x}
$$
|
Alternative formula for the Bernoulli pmf?
|
There's nothing wrong with it as it evaluates the values, it should evaluate. The usual formulation however uses powers so it becomes a case of binomial distribution with $n=1$ sample size. Recall tha
|
Alternative formula for the Bernoulli pmf?
There's nothing wrong with it as it evaluates the values, it should evaluate. The usual formulation however uses powers so it becomes a case of binomial distribution with $n=1$ sample size. Recall that the probability mass function of binomial distribution is
$$
{n \choose x} \,p^x (1-p)^{n-x}
$$
Where we have $n$ independent Bernoulli distributed with $x$ successes observed, hence $p^x$, and $n-x$ failures. The ${n \choose x}$ corrects for the fact that we want to account for the successes and failures appearing in any possible combination. With $n=1$ and $x \in \{0,1\}$ it reduces to Bernoulli distribution formulated with the powers
$$
\,p^x (1-p)^{1-x}
$$
|
Alternative formula for the Bernoulli pmf?
There's nothing wrong with it as it evaluates the values, it should evaluate. The usual formulation however uses powers so it becomes a case of binomial distribution with $n=1$ sample size. Recall tha
|
44,174
|
Alternative formula for the Bernoulli pmf?
|
This is fine, assuming that the domain of $f$ is $\{0,1\}$.
This is also true of the formulations in the other answers.
A different formulation involving the Iverson bracket is
\begin{align*}
f(x) = (1-p)[x=0]+p[x=1].\tag{1}
\end{align*}
One defines
\begin{align*}
[P] &= \begin{cases}
1, & \textrm{if $P$ is true}, \\
0, & \textrm{else.}
\end{cases}
\end{align*}
The Iverson bracket has many interesting algebraic properties that capture the logic of the statement $P$. One interesting difference with formulation (1) is that the domain of $f$ can be thought of as all of $\mathbb{Z}$ (or indeed all of $\mathbb{R}$).
This allows one to be loose with notation in a way that is totally rigorous.
For example, the expected value of $g(x)$ is
$$\sum_x g(x)f(x),$$
where the sum is typically understood to be over all of $\mathbb{Z}$, with the result
$$g(0)(1-p) + g(1)p.$$
|
Alternative formula for the Bernoulli pmf?
|
This is fine, assuming that the domain of $f$ is $\{0,1\}$.
This is also true of the formulations in the other answers.
A different formulation involving the Iverson bracket is
\begin{align*}
f(x) = (
|
Alternative formula for the Bernoulli pmf?
This is fine, assuming that the domain of $f$ is $\{0,1\}$.
This is also true of the formulations in the other answers.
A different formulation involving the Iverson bracket is
\begin{align*}
f(x) = (1-p)[x=0]+p[x=1].\tag{1}
\end{align*}
One defines
\begin{align*}
[P] &= \begin{cases}
1, & \textrm{if $P$ is true}, \\
0, & \textrm{else.}
\end{cases}
\end{align*}
The Iverson bracket has many interesting algebraic properties that capture the logic of the statement $P$. One interesting difference with formulation (1) is that the domain of $f$ can be thought of as all of $\mathbb{Z}$ (or indeed all of $\mathbb{R}$).
This allows one to be loose with notation in a way that is totally rigorous.
For example, the expected value of $g(x)$ is
$$\sum_x g(x)f(x),$$
where the sum is typically understood to be over all of $\mathbb{Z}$, with the result
$$g(0)(1-p) + g(1)p.$$
|
Alternative formula for the Bernoulli pmf?
This is fine, assuming that the domain of $f$ is $\{0,1\}$.
This is also true of the formulations in the other answers.
A different formulation involving the Iverson bracket is
\begin{align*}
f(x) = (
|
44,175
|
A description of the mean of the Geometric Distribution - is it unorthodox or just incorrect?
|
$\exp(\mathbb E[\log(X)])$
is the geometric mean of a positive random variable $X$
not the mean of a geometric random variable.
So either the homework directions put the words in the wrong order, or you transcribed them incorrectly
|
A description of the mean of the Geometric Distribution - is it unorthodox or just incorrect?
|
$\exp(\mathbb E[\log(X)])$
is the geometric mean of a positive random variable $X$
not the mean of a geometric random variable.
So either the homework directions put the words in the wrong order, or y
|
A description of the mean of the Geometric Distribution - is it unorthodox or just incorrect?
$\exp(\mathbb E[\log(X)])$
is the geometric mean of a positive random variable $X$
not the mean of a geometric random variable.
So either the homework directions put the words in the wrong order, or you transcribed them incorrectly
|
A description of the mean of the Geometric Distribution - is it unorthodox or just incorrect?
$\exp(\mathbb E[\log(X)])$
is the geometric mean of a positive random variable $X$
not the mean of a geometric random variable.
So either the homework directions put the words in the wrong order, or y
|
44,176
|
How to answer critiques about the inapplicability of the framework of frequentist statistics to the real world?
|
As to the first critique, it could be a critique of any and all branches of the sciences. There are no perfectly repeatable experiments. It isn't really possible to completely control any experiment. A meteor could strike the location of the experiment, for example.
Also, the ability to repeat an experiment is irrelevant. Most Frequentist inferences are of the form $\Pr(t(x)|\theta)$. Frequencies in that framework are a limiting form. If the model is true, then a p-value has meaning. The better critique would be "what happens when the model is not true?" That is a good critique because your null is usually the opposite of what you really believe to be true.
Frequentist frequencies are not probabilities in the colloquial sense. They are probabilities that provide guarantees. Except for exact tests, when you read that $p<.05$ it really does not mean that $p=.05$. It just guarantees that the false positive rate, if the null is true, will not exceed five percent over an infinite number of repetitions. However, as the number of repetitions becomes large enough, it will tend to converge.
It is true that one cannot do an infinite number of repetitions and it may be 100% wrong if you only do one sample. Nonetheless, it does provide a sensible way to make inferences and decisions. It allows you a method to control how often you will be made a fool of. It does not allow you to say this time is not the time I will be a fool.
The difficulty isn't in the math or the use but in the human need for there to be no false positives or negatives. The problem is in the human need for statistical significance to map perfectly to something being true and a lack of significance to something being false.
The second critique is a valid critique of any probabilistic methodology. It is probably true that Bayesian methods handle this critique a bit better because of the logic behind the construction of Bayesian methods.
If you need to be purist about that, then one could restrict the use of Frequentist methods to those cases where there truly is no prior knowledge or where the true null hypothesis of interest is a sharp null.
Let me illustrate the point.
You have a U.S. quarter that you are going to toss 50,000 times in a specially made vacuum chamber with a carefully constructed coin tossing machine. You want to determine if the coin is fair.
Even if you believe the coins to be "roughly fair", it is reasonable to discard that belief unless there really has been a controlled study of the fairness of U.S. coins. As a side note, a group of engineering students has done such a study.
The toss is totally deterministic and highly controlled. It is unclear how a Frequentist methodology would be disadvantaged here.
Now let us redo the experiment a bit.
Let us pretend that you and I are going to gamble money on the fairness of the coin. Indeed, I believe that the coin is so unfair that it will come up heads ten times in a row. You believe it is a fair coin. Prior to gambling any money, we will do a pilot study and have a third party toss the coin ten times. It comes up heads six out of ten times.
So I ask that you ante up 500:1 odds. I will toss the coin ten times.
Just before you do that a friend whispers in your ear that I apprenticed under my uncle who was a stage magician. Also, you are told that I was arrested, but not convicted, as working a number of street games like three-card Monte and coin games under the pseudonym Slick Eddy. Charges were dropped because, although I may have borne a striking resemblance to the alleged perpetrator, nobody was willing to come forward to identify me in a police lineup.
Wouldn't you prefer to incorporate that information with a Bayesian prior?
It is true that there is no such thing as a random coin toss. Any physicist, magician, or conman will tell you the same thing.
The Frequentist method would tell you that the entire procedure was not fair, after the fact, but it wouldn't allow you to incorporate all outside information. The Frequentist method is perfectly accurate by construction, but intrinsically less precise in the resulting estimators in this case.
The second argument is fittedness to purpose. Frequentist statistics are not a universal cure for all that ails mankind. They are a tool in a toolkit.
Let us flip the above example upside down.
Imagine that you truly do not possess any outside knowledge on something that you really must make a decision on. You do have the ability to collect a sample and you can use Bayesian or Frequentist statistics.
Frequentist statistics minimize the maximum amount of risk that you will be required to take. Bayesian methods do not. Frequentist methods, despite having no background information, provide guaranteed performance levels. In a state of true ignorance, that is a valuable thing to have. The Bayesian method cannot do that.
|
How to answer critiques about the inapplicability of the framework of frequentist statistics to the
|
As to the first critique, it could be a critique of any and all branches of the sciences. There are no perfectly repeatable experiments. It isn't really possible to completely control any experiment
|
How to answer critiques about the inapplicability of the framework of frequentist statistics to the real world?
As to the first critique, it could be a critique of any and all branches of the sciences. There are no perfectly repeatable experiments. It isn't really possible to completely control any experiment. A meteor could strike the location of the experiment, for example.
Also, the ability to repeat an experiment is irrelevant. Most Frequentist inferences are of the form $\Pr(t(x)|\theta)$. Frequencies in that framework are a limiting form. If the model is true, then a p-value has meaning. The better critique would be "what happens when the model is not true?" That is a good critique because your null is usually the opposite of what you really believe to be true.
Frequentist frequencies are not probabilities in the colloquial sense. They are probabilities that provide guarantees. Except for exact tests, when you read that $p<.05$ it really does not mean that $p=.05$. It just guarantees that the false positive rate, if the null is true, will not exceed five percent over an infinite number of repetitions. However, as the number of repetitions becomes large enough, it will tend to converge.
It is true that one cannot do an infinite number of repetitions and it may be 100% wrong if you only do one sample. Nonetheless, it does provide a sensible way to make inferences and decisions. It allows you a method to control how often you will be made a fool of. It does not allow you to say this time is not the time I will be a fool.
The difficulty isn't in the math or the use but in the human need for there to be no false positives or negatives. The problem is in the human need for statistical significance to map perfectly to something being true and a lack of significance to something being false.
The second critique is a valid critique of any probabilistic methodology. It is probably true that Bayesian methods handle this critique a bit better because of the logic behind the construction of Bayesian methods.
If you need to be purist about that, then one could restrict the use of Frequentist methods to those cases where there truly is no prior knowledge or where the true null hypothesis of interest is a sharp null.
Let me illustrate the point.
You have a U.S. quarter that you are going to toss 50,000 times in a specially made vacuum chamber with a carefully constructed coin tossing machine. You want to determine if the coin is fair.
Even if you believe the coins to be "roughly fair", it is reasonable to discard that belief unless there really has been a controlled study of the fairness of U.S. coins. As a side note, a group of engineering students has done such a study.
The toss is totally deterministic and highly controlled. It is unclear how a Frequentist methodology would be disadvantaged here.
Now let us redo the experiment a bit.
Let us pretend that you and I are going to gamble money on the fairness of the coin. Indeed, I believe that the coin is so unfair that it will come up heads ten times in a row. You believe it is a fair coin. Prior to gambling any money, we will do a pilot study and have a third party toss the coin ten times. It comes up heads six out of ten times.
So I ask that you ante up 500:1 odds. I will toss the coin ten times.
Just before you do that a friend whispers in your ear that I apprenticed under my uncle who was a stage magician. Also, you are told that I was arrested, but not convicted, as working a number of street games like three-card Monte and coin games under the pseudonym Slick Eddy. Charges were dropped because, although I may have borne a striking resemblance to the alleged perpetrator, nobody was willing to come forward to identify me in a police lineup.
Wouldn't you prefer to incorporate that information with a Bayesian prior?
It is true that there is no such thing as a random coin toss. Any physicist, magician, or conman will tell you the same thing.
The Frequentist method would tell you that the entire procedure was not fair, after the fact, but it wouldn't allow you to incorporate all outside information. The Frequentist method is perfectly accurate by construction, but intrinsically less precise in the resulting estimators in this case.
The second argument is fittedness to purpose. Frequentist statistics are not a universal cure for all that ails mankind. They are a tool in a toolkit.
Let us flip the above example upside down.
Imagine that you truly do not possess any outside knowledge on something that you really must make a decision on. You do have the ability to collect a sample and you can use Bayesian or Frequentist statistics.
Frequentist statistics minimize the maximum amount of risk that you will be required to take. Bayesian methods do not. Frequentist methods, despite having no background information, provide guaranteed performance levels. In a state of true ignorance, that is a valuable thing to have. The Bayesian method cannot do that.
|
How to answer critiques about the inapplicability of the framework of frequentist statistics to the
As to the first critique, it could be a critique of any and all branches of the sciences. There are no perfectly repeatable experiments. It isn't really possible to completely control any experiment
|
44,177
|
How to answer critiques about the inapplicability of the framework of frequentist statistics to the real world?
|
I think the issue with the arguments raised in the question is the naive realist philosophy of models apparently behind it.
If we model an experiment in a frequentist manner, what we do is that, when using the model, we treat the experiment as if it would be infinitely repeatable, with random outcomes the relative frequency of which stabilises for a growing number of observations.
The stated arguments seem to imply that this is only appropriate if the experiment really and objectively is of this kind. But a model is an idealisation. It seems quite clear that involving the exact physics of coin tossing would be a pointless effort when making predictions regarding, for example, how many heads you will observe in 1000 tosses. This is very easily possible assuming an i.i.d. frequentist model. Now obviously there is no guarantee that reality behaves like what is stated in the model. This can however (at least to some extent) be checked empirically, for example using the runs test to see whether sequences of heads and tails deviate from what is expected under independence. What can be validated in this way is not the truth of the model, but its fitness for the task for which it is used.
The model can in this way be used without requiring that what is formalised by the model is true in a naive realist sense. This may work well or not; we should not forget that we're dealing with an idealisation and we're making assumptions that may affect our conclusions from the model. Therefore the model and its assumptions need to be critically discussed, using knowledge of the situation as well as empirical checks, and rejected or updated if required. Sometimes the best use of the model is to enable the researcher to learn in which way it is violated.
Note that concepts such as "independence" and "identical repetition" are ultimately human constructs. Assuming "independence" means something like "any conceivable source of dependence is deemed unimportant by the observer", "identical repetition" means that "the observer perceives the repetitions as not different in any relevant respect". This involves judgements of the observer that can be challenged, discussed, and sometimes empirically falsified. The observer themselves may only make such judgements in a tentative fashion, being open to learn and adjust in case of falsification or strong doubts raised.
Another remark on model assumptions: Let's say we are interested in estimating a certain real quantity, and we have observations related to it. Assuming a certain frequentist model (such as "data i.i.d. exponentially distributed with unknown parameter") and identifying a parameter with what we are interested in in reality allows us to derive an estimator that is in a well defined sense optimal in the model framework. So we may use this estimator to estimate the quantity of interest in reality. Although the guaranteed optimality of the estimator requires the model to be true, putting up a model like this is a clever way to motivate a reasonable estimator in reality, and even giving an indication about the uncertainty using, say, a confidence interval, even without any guarantee that the model is true. What the model has done here is giving us a rationale, an idea, for what to do, and we can think of this as making sense as long as there are no specific objections against that model. As long as we don't have a better model, it is hard to argue that we can do better than that (although I'd find it desirable to interpret results acknowledging that the reason for using the model is not that we knew that it's true).
|
How to answer critiques about the inapplicability of the framework of frequentist statistics to the
|
I think the issue with the arguments raised in the question is the naive realist philosophy of models apparently behind it.
If we model an experiment in a frequentist manner, what we do is that, when
|
How to answer critiques about the inapplicability of the framework of frequentist statistics to the real world?
I think the issue with the arguments raised in the question is the naive realist philosophy of models apparently behind it.
If we model an experiment in a frequentist manner, what we do is that, when using the model, we treat the experiment as if it would be infinitely repeatable, with random outcomes the relative frequency of which stabilises for a growing number of observations.
The stated arguments seem to imply that this is only appropriate if the experiment really and objectively is of this kind. But a model is an idealisation. It seems quite clear that involving the exact physics of coin tossing would be a pointless effort when making predictions regarding, for example, how many heads you will observe in 1000 tosses. This is very easily possible assuming an i.i.d. frequentist model. Now obviously there is no guarantee that reality behaves like what is stated in the model. This can however (at least to some extent) be checked empirically, for example using the runs test to see whether sequences of heads and tails deviate from what is expected under independence. What can be validated in this way is not the truth of the model, but its fitness for the task for which it is used.
The model can in this way be used without requiring that what is formalised by the model is true in a naive realist sense. This may work well or not; we should not forget that we're dealing with an idealisation and we're making assumptions that may affect our conclusions from the model. Therefore the model and its assumptions need to be critically discussed, using knowledge of the situation as well as empirical checks, and rejected or updated if required. Sometimes the best use of the model is to enable the researcher to learn in which way it is violated.
Note that concepts such as "independence" and "identical repetition" are ultimately human constructs. Assuming "independence" means something like "any conceivable source of dependence is deemed unimportant by the observer", "identical repetition" means that "the observer perceives the repetitions as not different in any relevant respect". This involves judgements of the observer that can be challenged, discussed, and sometimes empirically falsified. The observer themselves may only make such judgements in a tentative fashion, being open to learn and adjust in case of falsification or strong doubts raised.
Another remark on model assumptions: Let's say we are interested in estimating a certain real quantity, and we have observations related to it. Assuming a certain frequentist model (such as "data i.i.d. exponentially distributed with unknown parameter") and identifying a parameter with what we are interested in in reality allows us to derive an estimator that is in a well defined sense optimal in the model framework. So we may use this estimator to estimate the quantity of interest in reality. Although the guaranteed optimality of the estimator requires the model to be true, putting up a model like this is a clever way to motivate a reasonable estimator in reality, and even giving an indication about the uncertainty using, say, a confidence interval, even without any guarantee that the model is true. What the model has done here is giving us a rationale, an idea, for what to do, and we can think of this as making sense as long as there are no specific objections against that model. As long as we don't have a better model, it is hard to argue that we can do better than that (although I'd find it desirable to interpret results acknowledging that the reason for using the model is not that we knew that it's true).
|
How to answer critiques about the inapplicability of the framework of frequentist statistics to the
I think the issue with the arguments raised in the question is the naive realist philosophy of models apparently behind it.
If we model an experiment in a frequentist manner, what we do is that, when
|
44,178
|
How to answer critiques about the inapplicability of the framework of frequentist statistics to the real world?
|
Having read the blog post, I think the author is saying that we shouldn't use randomness in models of the real world because the real world is not random, since everything (such as a coin flip) actually has a cause.
This makes probability theory the science of last resort. Only after
truly exhausting your ability to investigate causal factors and
processes should you indulge in probabilistic thinking. Doing
otherwise is a cop-out, one that dangerously feels “scientific.”
However, I do not agree with this. I would say that probability is simply a way of quantifying what you don't know, or even just stating that you don't know something.
Treating something as random is the same as saying that you don't know it.
So if you insist (as some people do) on never using probability theory, then you are assuming that you know everything relevant to the problem under investigation, which seems to me to be even more of a cop-out than straight-up admitting that you don't know some things.
Whether your lack of knowledge is quantified as "what if things had gone some other way?" (frequentist) or "I'm not sure about the underlying state of the world in the first place" (Bayesian) isn't so important. What is important is that, whatever answer you come up with, you shouldn't be certain that it's correct. That's just crazy!
As Lewian says in the answer above, "All models are wrong, but some are useful" also applies to probability itself. Expressing uncertainty in terms of probabilities is a model of the real world, and it's often useful.
|
How to answer critiques about the inapplicability of the framework of frequentist statistics to the
|
Having read the blog post, I think the author is saying that we shouldn't use randomness in models of the real world because the real world is not random, since everything (such as a coin flip) actual
|
How to answer critiques about the inapplicability of the framework of frequentist statistics to the real world?
Having read the blog post, I think the author is saying that we shouldn't use randomness in models of the real world because the real world is not random, since everything (such as a coin flip) actually has a cause.
This makes probability theory the science of last resort. Only after
truly exhausting your ability to investigate causal factors and
processes should you indulge in probabilistic thinking. Doing
otherwise is a cop-out, one that dangerously feels “scientific.”
However, I do not agree with this. I would say that probability is simply a way of quantifying what you don't know, or even just stating that you don't know something.
Treating something as random is the same as saying that you don't know it.
So if you insist (as some people do) on never using probability theory, then you are assuming that you know everything relevant to the problem under investigation, which seems to me to be even more of a cop-out than straight-up admitting that you don't know some things.
Whether your lack of knowledge is quantified as "what if things had gone some other way?" (frequentist) or "I'm not sure about the underlying state of the world in the first place" (Bayesian) isn't so important. What is important is that, whatever answer you come up with, you shouldn't be certain that it's correct. That's just crazy!
As Lewian says in the answer above, "All models are wrong, but some are useful" also applies to probability itself. Expressing uncertainty in terms of probabilities is a model of the real world, and it's often useful.
|
How to answer critiques about the inapplicability of the framework of frequentist statistics to the
Having read the blog post, I think the author is saying that we shouldn't use randomness in models of the real world because the real world is not random, since everything (such as a coin flip) actual
|
44,179
|
Elementary explanation of Gaussian Processes
|
A stochastic process $X(t), t \in T$ is a Gaussian process (GP) if $\sum_i a_i X(t_i)$ is a Gaussian random variable for any such linear combination. Equivalently, it is a GP if all its finite-dimensional distributions are (multivariate) Gaussian, that is $(X(t_1),X(t_2),\dots,X(t_n))$ is Gaussian for any choice of $\{t_i\}$. Usually, one in addition requires that the sample path $t \mapsto X(t)$ be continuous. This way you can view a GP as a random function in $C(T)$ the space of all continuous functions on $T$.
A GP is characterized by a mean function $\mu: T \to \mathbb R$ and a covariance kernel $K$ (a positive semi definite function from $T \times T$ to $\mathbb R$) with the property that $\mathbb E [X(t)] = \mu(t)$ for all $t \in T$ and
$$\text{cov}\big(\mathbf X) = (K(t_i,t_j))_{i,j=1}^n$$ where $\mathbf X = (X(t_1),X(t_2),\dots,X(t_n))$ and this holds for any choice of $\{t_1,\dots,t_n\} \subset T$ and any $n \ge 1$. Note that $\mu$ and $K$ effectively specify all those finite-dimensional normal distributions we talked about before.
This is all good, but what does a GP look like? I mean, how do we even sample from a GP and what do the functions we sample from look like? How can we output an entire function every time we sample a single element from GP?
Representation
There is a very powerful result that allows us to give a nice answer to this question. To make it simple, I am not going to worry about mathematical rigor that much (maybe a little bit!). Here are a couple of observations:
Associated to every kernel function $K(\cdot,\cdot)$, there is a unique reproducing kernel Hilbert space (RKHS), call it $H$. Let's not worry what RKHS means, it is just a deterministic space of "nice" functions. We can view $H$ as a subset of the $L^2$ space of functions on $T$.
Thus associated to every GP, there is an RKHS determined by its covariance kernel. It turns out that the closure of $H$ (in $C(T)$) is the support of the GP, that is, $P( X \in \bar H) = 1$. We will never observe anything outside $\bar H$ when we sample from the GP.
Let $\{\phi_j\}_{j \in \mathbb N}$ be a complete orthonormal system of eigenfunctions of the kernel matrix $K(\cdot,\cdot)$ (in $L^2$) and $\{\lambda_j\}_{j \in \mathbb N}$ the corresponding nonzero eigenvalues. We assume they are ordered as follows:
$$
\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge ...
$$
and they are all nonnegative (a consequence of positive semi-definiteness.) If $H$ is infinite-dimensional, this sequence will also be infinite.
You might have heard of the Mercer's theorem which gives an expansion of the kernel in terms of these:
$$
K(s,t) = \sum_{j=1}^\infty \lambda_j \phi_j(s) \phi_j(t).
$$
It turns out that the associated GP also has an expansion in terms of those eigenfunctions: Then, almost surely
$$
X(t) = \mu(t) + \sum_{j=1}^\infty \sqrt{\lambda_j} g_j \phi_j(t). \quad (*)
$$
where $g_i \sim N(0,1), i =1,2,3,\dots$ i.i.d. standard normal variables.
This is called a Karhunen–Loève expansion. Since $\lambda_j \to 0$ as $j \to \infty$, you can truncate this series to some large value of $N$ and get a good approximation. Basically, a GP is a linear combination of these eigenfunctions with random weights $\sqrt{\lambda_j} g_j$ (which are drawn independently from a Gaussian distribution with zero mean and variance $\lambda_j$). You can think of these eigenfunctions as the directions of variation of the GP. There is more variability along $\phi_1$ followed by $\phi_2$ and so on. (Think of it like PCA in infinite dimensions.)
Representation (*) is good enough for understanding and you can just stop here. But the story doesn't end here. See the end of this post. To practically sample from the GP, find $\{\phi_j\}$ and $\lambda_j$, draw $\{g_j\}$ as i.i.d. standard normal variables and form $X(t) \approx \sum_{j=1}^N \sqrt{\lambda_j} g_j \phi_j(t)$ for some large enough $N$.
Example
Consider the Brownian motion on $[0,1]$ which is a centered sample-path-continuous GP with covariance matrix $\mathbb E [X(t) X(s)] = \min\{t,s\}$ for all $s,t \in [0,1]$. The eigenvalues and eigenfunctions are given by (see Example 12.23 in Wainwright's book)
$$
\phi_j(t) = \sin \frac{(2j-1)\pi t}{2}, \quad \lambda_j = \Big( \frac{2}{(2j-1)\pi}\Big)^2.
$$
Note that $\lambda_j \to 0$ as $j \to \infty$. Then, if you believe the K-L result, we can give a explicit construction of the Brownian motion as follows:
$$
X(t) = \sum_{j=1}^\infty \frac{2 g_j}{(2j-1)\pi} \, \sin \left( \frac{(2j-1)\pi t}{2}\right) = \sum_{k \; \text{odd}} \frac{2 g_k}{k\pi} \, \sin \left( \frac{k\pi t}{2}\right).
$$
where $g_1, g_2, g_3, \dots \stackrel{\text{i.i.d.}}{\sim} N(0,1)$.
Here is some code to draw two samples from this random function:
tvec = seq(0,1, length.out = 1000)
N = 500
set.seed(125)
g1 = rnorm(N)
g2 = rnorm(N)
X_realization = function(tvec, g) {
sapply(tvec, function(t) sum( sapply(1:N, function(j) 2*g[j]*sin((2*j-1)*pi*t/2)/(pi*(2*j-1)))) )
}
Xs1 = X_realization(tvec, g1)
Xs2 = X_realization(tvec, g2)
yrange = range(cbind(Xs1, Xs2))
plot(tvec, Xs1, type="l", ylim = yrange, main = "Two realizations of a Brownian motion", xlab = "t", ylab = "X(t)")
lines(tvec, Xs2, col="red")
And the resulting plot:
The rest of the story
It turns out that any complete orthonormal system of $H$ will do. So if you pick any sequence of functions $\{h_j\}$ that are orthonormal in $H$ and their closure spans the entire $H$, then, almost surely
$$
X(t) = \sum_{j=1}^\infty g_j h_j,
$$
where $\{g_j\}$ is some i.i.d. $N(0,1)$ sequence. This gives much flexibility in representing a GP and some basis might work better than the other in a specific application. (See Theorem 2.6.10 in Giné and Nickl's book).
In finite dimensions
EDIT: Since this post got some attention, let me add this too. The K-L expansion mentioned above is not unfamiliar to you if you have thought about how to sample a general multivariate Guassian vector $\mathbf x \sim N(\mu, \Sigma)$. What you would generally do is to write $\mathbf x = (x_i) = \mu + \Sigma^{1/2} \mathbf z \in \mathbb R^n$ where $\mathbf z \sim N(0,I)$ that is $\mathbf z = (z_i)$ with $z_1,\dots,z_n$ i.i.d. $\sim N(0,1)$ and $\Sigma^{1/2}$ is the matrix square root of $\Sigma$.
Another closely related approach is to perform eigen-decompisition on $\Sigma = U \Lambda U^T$ where $U = [\mathbf u_1 \mid \mathbf u_2 \mid \cdots \mid \mathbf u_n]$ is an orthogonal matrix whose columns are the eigenvectors of $\Sigma$ and $\Lambda = \text{diag}(\lambda_i)$ is the diagonal matrix of the corresponding eigenvalues. Then, we can generate $\mathbf x = \mu + U \Lambda^{1/2} \mathbf z$ (verify that this has the right distribution!). If you expand this equation you get
$$
\mathbf x = \mu + \sum_{i=1}^n \sqrt{\lambda_i} z_i \mathbf u_i
$$
which is just the finite-dimensional analog of the K-L expansion.
|
Elementary explanation of Gaussian Processes
|
A stochastic process $X(t), t \in T$ is a Gaussian process (GP) if $\sum_i a_i X(t_i)$ is a Gaussian random variable for any such linear combination. Equivalently, it is a GP if all its finite-dimensi
|
Elementary explanation of Gaussian Processes
A stochastic process $X(t), t \in T$ is a Gaussian process (GP) if $\sum_i a_i X(t_i)$ is a Gaussian random variable for any such linear combination. Equivalently, it is a GP if all its finite-dimensional distributions are (multivariate) Gaussian, that is $(X(t_1),X(t_2),\dots,X(t_n))$ is Gaussian for any choice of $\{t_i\}$. Usually, one in addition requires that the sample path $t \mapsto X(t)$ be continuous. This way you can view a GP as a random function in $C(T)$ the space of all continuous functions on $T$.
A GP is characterized by a mean function $\mu: T \to \mathbb R$ and a covariance kernel $K$ (a positive semi definite function from $T \times T$ to $\mathbb R$) with the property that $\mathbb E [X(t)] = \mu(t)$ for all $t \in T$ and
$$\text{cov}\big(\mathbf X) = (K(t_i,t_j))_{i,j=1}^n$$ where $\mathbf X = (X(t_1),X(t_2),\dots,X(t_n))$ and this holds for any choice of $\{t_1,\dots,t_n\} \subset T$ and any $n \ge 1$. Note that $\mu$ and $K$ effectively specify all those finite-dimensional normal distributions we talked about before.
This is all good, but what does a GP look like? I mean, how do we even sample from a GP and what do the functions we sample from look like? How can we output an entire function every time we sample a single element from GP?
Representation
There is a very powerful result that allows us to give a nice answer to this question. To make it simple, I am not going to worry about mathematical rigor that much (maybe a little bit!). Here are a couple of observations:
Associated to every kernel function $K(\cdot,\cdot)$, there is a unique reproducing kernel Hilbert space (RKHS), call it $H$. Let's not worry what RKHS means, it is just a deterministic space of "nice" functions. We can view $H$ as a subset of the $L^2$ space of functions on $T$.
Thus associated to every GP, there is an RKHS determined by its covariance kernel. It turns out that the closure of $H$ (in $C(T)$) is the support of the GP, that is, $P( X \in \bar H) = 1$. We will never observe anything outside $\bar H$ when we sample from the GP.
Let $\{\phi_j\}_{j \in \mathbb N}$ be a complete orthonormal system of eigenfunctions of the kernel matrix $K(\cdot,\cdot)$ (in $L^2$) and $\{\lambda_j\}_{j \in \mathbb N}$ the corresponding nonzero eigenvalues. We assume they are ordered as follows:
$$
\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge ...
$$
and they are all nonnegative (a consequence of positive semi-definiteness.) If $H$ is infinite-dimensional, this sequence will also be infinite.
You might have heard of the Mercer's theorem which gives an expansion of the kernel in terms of these:
$$
K(s,t) = \sum_{j=1}^\infty \lambda_j \phi_j(s) \phi_j(t).
$$
It turns out that the associated GP also has an expansion in terms of those eigenfunctions: Then, almost surely
$$
X(t) = \mu(t) + \sum_{j=1}^\infty \sqrt{\lambda_j} g_j \phi_j(t). \quad (*)
$$
where $g_i \sim N(0,1), i =1,2,3,\dots$ i.i.d. standard normal variables.
This is called a Karhunen–Loève expansion. Since $\lambda_j \to 0$ as $j \to \infty$, you can truncate this series to some large value of $N$ and get a good approximation. Basically, a GP is a linear combination of these eigenfunctions with random weights $\sqrt{\lambda_j} g_j$ (which are drawn independently from a Gaussian distribution with zero mean and variance $\lambda_j$). You can think of these eigenfunctions as the directions of variation of the GP. There is more variability along $\phi_1$ followed by $\phi_2$ and so on. (Think of it like PCA in infinite dimensions.)
Representation (*) is good enough for understanding and you can just stop here. But the story doesn't end here. See the end of this post. To practically sample from the GP, find $\{\phi_j\}$ and $\lambda_j$, draw $\{g_j\}$ as i.i.d. standard normal variables and form $X(t) \approx \sum_{j=1}^N \sqrt{\lambda_j} g_j \phi_j(t)$ for some large enough $N$.
Example
Consider the Brownian motion on $[0,1]$ which is a centered sample-path-continuous GP with covariance matrix $\mathbb E [X(t) X(s)] = \min\{t,s\}$ for all $s,t \in [0,1]$. The eigenvalues and eigenfunctions are given by (see Example 12.23 in Wainwright's book)
$$
\phi_j(t) = \sin \frac{(2j-1)\pi t}{2}, \quad \lambda_j = \Big( \frac{2}{(2j-1)\pi}\Big)^2.
$$
Note that $\lambda_j \to 0$ as $j \to \infty$. Then, if you believe the K-L result, we can give a explicit construction of the Brownian motion as follows:
$$
X(t) = \sum_{j=1}^\infty \frac{2 g_j}{(2j-1)\pi} \, \sin \left( \frac{(2j-1)\pi t}{2}\right) = \sum_{k \; \text{odd}} \frac{2 g_k}{k\pi} \, \sin \left( \frac{k\pi t}{2}\right).
$$
where $g_1, g_2, g_3, \dots \stackrel{\text{i.i.d.}}{\sim} N(0,1)$.
Here is some code to draw two samples from this random function:
tvec = seq(0,1, length.out = 1000)
N = 500
set.seed(125)
g1 = rnorm(N)
g2 = rnorm(N)
X_realization = function(tvec, g) {
sapply(tvec, function(t) sum( sapply(1:N, function(j) 2*g[j]*sin((2*j-1)*pi*t/2)/(pi*(2*j-1)))) )
}
Xs1 = X_realization(tvec, g1)
Xs2 = X_realization(tvec, g2)
yrange = range(cbind(Xs1, Xs2))
plot(tvec, Xs1, type="l", ylim = yrange, main = "Two realizations of a Brownian motion", xlab = "t", ylab = "X(t)")
lines(tvec, Xs2, col="red")
And the resulting plot:
The rest of the story
It turns out that any complete orthonormal system of $H$ will do. So if you pick any sequence of functions $\{h_j\}$ that are orthonormal in $H$ and their closure spans the entire $H$, then, almost surely
$$
X(t) = \sum_{j=1}^\infty g_j h_j,
$$
where $\{g_j\}$ is some i.i.d. $N(0,1)$ sequence. This gives much flexibility in representing a GP and some basis might work better than the other in a specific application. (See Theorem 2.6.10 in Giné and Nickl's book).
In finite dimensions
EDIT: Since this post got some attention, let me add this too. The K-L expansion mentioned above is not unfamiliar to you if you have thought about how to sample a general multivariate Guassian vector $\mathbf x \sim N(\mu, \Sigma)$. What you would generally do is to write $\mathbf x = (x_i) = \mu + \Sigma^{1/2} \mathbf z \in \mathbb R^n$ where $\mathbf z \sim N(0,I)$ that is $\mathbf z = (z_i)$ with $z_1,\dots,z_n$ i.i.d. $\sim N(0,1)$ and $\Sigma^{1/2}$ is the matrix square root of $\Sigma$.
Another closely related approach is to perform eigen-decompisition on $\Sigma = U \Lambda U^T$ where $U = [\mathbf u_1 \mid \mathbf u_2 \mid \cdots \mid \mathbf u_n]$ is an orthogonal matrix whose columns are the eigenvectors of $\Sigma$ and $\Lambda = \text{diag}(\lambda_i)$ is the diagonal matrix of the corresponding eigenvalues. Then, we can generate $\mathbf x = \mu + U \Lambda^{1/2} \mathbf z$ (verify that this has the right distribution!). If you expand this equation you get
$$
\mathbf x = \mu + \sum_{i=1}^n \sqrt{\lambda_i} z_i \mathbf u_i
$$
which is just the finite-dimensional analog of the K-L expansion.
|
Elementary explanation of Gaussian Processes
A stochastic process $X(t), t \in T$ is a Gaussian process (GP) if $\sum_i a_i X(t_i)$ is a Gaussian random variable for any such linear combination. Equivalently, it is a GP if all its finite-dimensi
|
44,180
|
Elementary explanation of Gaussian Processes
|
You are given points sampled from a function, that is not known, this is your data. Traditional curve fitting algorithms would try to find a function that fits the data the best. Gaussian process learns distribution over functions, i.e. you could sample from this distribution the functions that are consistent with the data. This distribution is defined in terms of mean function and covariance function, they are defined in terms of kernels. Kernel can be thought as a measure of similarity between points, so it forces the Gaussian process functions to be more similar for points that are similar to each other, and can be dissimilar for points that are less similar to each other.
When using Gaussian processes in optimization, we use it as a surrogate model, so instead of optimizing the target function, we optimize the function approximated by Gaussian process. We do this when it is expensive to evaluate the target function, for example, it is a neural network that is expensive and takes long to train. Instead we approximate the function. Since Gaussian process, because of being defined in terms of kernels, it is more uncertain for the areas where it saw less data and more certain in areas with more data. Thanks to this, you can optimize, or sample from the distribution, by taking into consideration the uncertainty. You want to explore areas that you are uncertain about.
|
Elementary explanation of Gaussian Processes
|
You are given points sampled from a function, that is not known, this is your data. Traditional curve fitting algorithms would try to find a function that fits the data the best. Gaussian process lear
|
Elementary explanation of Gaussian Processes
You are given points sampled from a function, that is not known, this is your data. Traditional curve fitting algorithms would try to find a function that fits the data the best. Gaussian process learns distribution over functions, i.e. you could sample from this distribution the functions that are consistent with the data. This distribution is defined in terms of mean function and covariance function, they are defined in terms of kernels. Kernel can be thought as a measure of similarity between points, so it forces the Gaussian process functions to be more similar for points that are similar to each other, and can be dissimilar for points that are less similar to each other.
When using Gaussian processes in optimization, we use it as a surrogate model, so instead of optimizing the target function, we optimize the function approximated by Gaussian process. We do this when it is expensive to evaluate the target function, for example, it is a neural network that is expensive and takes long to train. Instead we approximate the function. Since Gaussian process, because of being defined in terms of kernels, it is more uncertain for the areas where it saw less data and more certain in areas with more data. Thanks to this, you can optimize, or sample from the distribution, by taking into consideration the uncertainty. You want to explore areas that you are uncertain about.
|
Elementary explanation of Gaussian Processes
You are given points sampled from a function, that is not known, this is your data. Traditional curve fitting algorithms would try to find a function that fits the data the best. Gaussian process lear
|
44,181
|
Elementary explanation of Gaussian Processes
|
I found this article very helpful towards building an intuition of GPs.
The mean and covariance functions you define for your prior distributions sets up the distribution that you can sample functions from, where the covariance affects the shape (wiggliness, trend, periodicity) of the functions.
Given observation points you use define a conditional distribution which now must pass through (or close to) your observation points. So if you have a observed points $Y$, and you are interested in the distribution of $X$ given $Y$ and you have assumed a Gaussian prior joint distribution with mean $\begin{pmatrix} \mu_X & \mu_Y \end{pmatrix}^\top$ and covariance matrix $\begin{pmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY}\end{pmatrix}$, the conditional distribution given the observations $Y$, $X|Y$ is given by
$$X|Y \sim \mathcal{N} (\mu_X + \Sigma_{XY}\Sigma_{YY}^{-1}(Y-\mu_Y), \Sigma_{XX} - \Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX}).$$
This distribution now looks more like this:
Once again it is a distribution, so functions can be sampled from it.
I also found this YouTube video helpful for building an intuition for kernel functions effect on shape.
(Images used from this article)
|
Elementary explanation of Gaussian Processes
|
I found this article very helpful towards building an intuition of GPs.
The mean and covariance functions you define for your prior distributions sets up the distribution that you can sample functions
|
Elementary explanation of Gaussian Processes
I found this article very helpful towards building an intuition of GPs.
The mean and covariance functions you define for your prior distributions sets up the distribution that you can sample functions from, where the covariance affects the shape (wiggliness, trend, periodicity) of the functions.
Given observation points you use define a conditional distribution which now must pass through (or close to) your observation points. So if you have a observed points $Y$, and you are interested in the distribution of $X$ given $Y$ and you have assumed a Gaussian prior joint distribution with mean $\begin{pmatrix} \mu_X & \mu_Y \end{pmatrix}^\top$ and covariance matrix $\begin{pmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY}\end{pmatrix}$, the conditional distribution given the observations $Y$, $X|Y$ is given by
$$X|Y \sim \mathcal{N} (\mu_X + \Sigma_{XY}\Sigma_{YY}^{-1}(Y-\mu_Y), \Sigma_{XX} - \Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX}).$$
This distribution now looks more like this:
Once again it is a distribution, so functions can be sampled from it.
I also found this YouTube video helpful for building an intuition for kernel functions effect on shape.
(Images used from this article)
|
Elementary explanation of Gaussian Processes
I found this article very helpful towards building an intuition of GPs.
The mean and covariance functions you define for your prior distributions sets up the distribution that you can sample functions
|
44,182
|
Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate]
|
If the two vectors are indepdendent, we have $p(\textbf{X,Y})=p(\textbf{X})p(\textbf{Y})$. Considering a specific pair $X_i$,$Y_j$, $$\begin{align}p(X_i,Y_j) &=\int_{X_k,k\neq i}\int_{Y_m,m\neq j}P(\textbf{X},\textbf{Y})\\ &=\int_{X_k,k\neq i}\int_{Y_m,m\neq j}P(\textbf{X})P(\textbf{Y})\\ &=\int_{X_k,k\neq i}P(\textbf{X})\int_{Y_m,m\neq j}P(\textbf{Y}) \\ &=P(X_i)P(Y_j)\end{align}$$ So, they're independent, which means $\operatorname{cov}(X_i,Y_j)=0$. But, having $\operatorname{cov}(X_i,Y_j)=0$ doesn't mean that the two are independent, as you asked.
|
Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate
|
If the two vectors are indepdendent, we have $p(\textbf{X,Y})=p(\textbf{X})p(\textbf{Y})$. Considering a specific pair $X_i$,$Y_j$, $$\begin{align}p(X_i,Y_j) &=\int_{X_k,k\neq i}\int_{Y_m,m\neq j}P(\t
|
Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate]
If the two vectors are indepdendent, we have $p(\textbf{X,Y})=p(\textbf{X})p(\textbf{Y})$. Considering a specific pair $X_i$,$Y_j$, $$\begin{align}p(X_i,Y_j) &=\int_{X_k,k\neq i}\int_{Y_m,m\neq j}P(\textbf{X},\textbf{Y})\\ &=\int_{X_k,k\neq i}\int_{Y_m,m\neq j}P(\textbf{X})P(\textbf{Y})\\ &=\int_{X_k,k\neq i}P(\textbf{X})\int_{Y_m,m\neq j}P(\textbf{Y}) \\ &=P(X_i)P(Y_j)\end{align}$$ So, they're independent, which means $\operatorname{cov}(X_i,Y_j)=0$. But, having $\operatorname{cov}(X_i,Y_j)=0$ doesn't mean that the two are independent, as you asked.
|
Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate
If the two vectors are indepdendent, we have $p(\textbf{X,Y})=p(\textbf{X})p(\textbf{Y})$. Considering a specific pair $X_i$,$Y_j$, $$\begin{align}p(X_i,Y_j) &=\int_{X_k,k\neq i}\int_{Y_m,m\neq j}P(\t
|
44,183
|
Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate]
|
In addition to the answer by @gunes, here it is better to use the definitions directly. Two random variables (or vectors, as in this case) $\mathbf{X}, \mathbf{Y}$ are independent if all events determined by $\mathbf{X}$ are independent from all events determined by $\mathbf{Y}^\dagger$.
But an event determined by $X_i$ is certainly (indirectly) determined by $\mathbf{X}$. So the conclusion follows directly from the definition, without any need for integration or summation.
$^\dagger$ events determined by $\mathbf{X}$ means *member of the $\sigma$-algebra generated by $\mathbf{X}$.
|
Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate
|
In addition to the answer by @gunes, here it is better to use the definitions directly. Two random variables (or vectors, as in this case) $\mathbf{X}, \mathbf{Y}$ are independent if all events determ
|
Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate]
In addition to the answer by @gunes, here it is better to use the definitions directly. Two random variables (or vectors, as in this case) $\mathbf{X}, \mathbf{Y}$ are independent if all events determined by $\mathbf{X}$ are independent from all events determined by $\mathbf{Y}^\dagger$.
But an event determined by $X_i$ is certainly (indirectly) determined by $\mathbf{X}$. So the conclusion follows directly from the definition, without any need for integration or summation.
$^\dagger$ events determined by $\mathbf{X}$ means *member of the $\sigma$-algebra generated by $\mathbf{X}$.
|
Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate
In addition to the answer by @gunes, here it is better to use the definitions directly. Two random variables (or vectors, as in this case) $\mathbf{X}, \mathbf{Y}$ are independent if all events determ
|
44,184
|
Can log likelihood function be negative
|
The likelihood function is defined as
$$
\mathcal{L}(\theta|X) = \prod_{i=1}^n f_\theta(X_i)
$$
and is a product of probability mass functions (discrete variables) or probability density functions (continuous variables) $f_\theta$ parametrized by $\theta$ and evaluated at the $X_i$ points.
Probability densities are non-negative, while probabilities also are less or equal to one. It follows that their product cannot be negative. The natural logarithm function is negative for values less than one and positive for values greater than one. So yes, it is possible that you end up with a negative value for log-likelihood (for discrete variables it will always be so).
|
Can log likelihood function be negative
|
The likelihood function is defined as
$$
\mathcal{L}(\theta|X) = \prod_{i=1}^n f_\theta(X_i)
$$
and is a product of probability mass functions (discrete variables) or probability density functions (co
|
Can log likelihood function be negative
The likelihood function is defined as
$$
\mathcal{L}(\theta|X) = \prod_{i=1}^n f_\theta(X_i)
$$
and is a product of probability mass functions (discrete variables) or probability density functions (continuous variables) $f_\theta$ parametrized by $\theta$ and evaluated at the $X_i$ points.
Probability densities are non-negative, while probabilities also are less or equal to one. It follows that their product cannot be negative. The natural logarithm function is negative for values less than one and positive for values greater than one. So yes, it is possible that you end up with a negative value for log-likelihood (for discrete variables it will always be so).
|
Can log likelihood function be negative
The likelihood function is defined as
$$
\mathcal{L}(\theta|X) = \prod_{i=1}^n f_\theta(X_i)
$$
and is a product of probability mass functions (discrete variables) or probability density functions (co
|
44,185
|
Why did I get a negative adjusted-$R^2$ in simple linear regression?
|
Adjusted $R^2$ is:
$${R}_\text{adj}^{2}={1-(1-R^{2}){n-1 \over n-p-1}}$$
where $p$ is the number of predictors (not counting the intercept) and $n$ is the number of observations.
This will be less than $0$ when
$$\frac{p}{n-1}>R^2\,.$$
$R^2$ can be as low as $0$, so this may happen any time $p>0$. This means that it can indeed happen with $p=1$.
|
Why did I get a negative adjusted-$R^2$ in simple linear regression?
|
Adjusted $R^2$ is:
$${R}_\text{adj}^{2}={1-(1-R^{2}){n-1 \over n-p-1}}$$
where $p$ is the number of predictors (not counting the intercept) and $n$ is the number of observations.
This will be less tha
|
Why did I get a negative adjusted-$R^2$ in simple linear regression?
Adjusted $R^2$ is:
$${R}_\text{adj}^{2}={1-(1-R^{2}){n-1 \over n-p-1}}$$
where $p$ is the number of predictors (not counting the intercept) and $n$ is the number of observations.
This will be less than $0$ when
$$\frac{p}{n-1}>R^2\,.$$
$R^2$ can be as low as $0$, so this may happen any time $p>0$. This means that it can indeed happen with $p=1$.
|
Why did I get a negative adjusted-$R^2$ in simple linear regression?
Adjusted $R^2$ is:
$${R}_\text{adj}^{2}={1-(1-R^{2}){n-1 \over n-p-1}}$$
where $p$ is the number of predictors (not counting the intercept) and $n$ is the number of observations.
This will be less tha
|
44,186
|
Why did I get a negative adjusted-$R^2$ in simple linear regression?
|
A way to conceptualize this is that an adjusted $R^2$ estimates the population $R^2$, so an unbiased estimator of a population $R^2$ of zero has to average zero, thus necessitating that some sample estimates must be below zero.
|
Why did I get a negative adjusted-$R^2$ in simple linear regression?
|
A way to conceptualize this is that an adjusted $R^2$ estimates the population $R^2$, so an unbiased estimator of a population $R^2$ of zero has to average zero, thus necessitating that some sample es
|
Why did I get a negative adjusted-$R^2$ in simple linear regression?
A way to conceptualize this is that an adjusted $R^2$ estimates the population $R^2$, so an unbiased estimator of a population $R^2$ of zero has to average zero, thus necessitating that some sample estimates must be below zero.
|
Why did I get a negative adjusted-$R^2$ in simple linear regression?
A way to conceptualize this is that an adjusted $R^2$ estimates the population $R^2$, so an unbiased estimator of a population $R^2$ of zero has to average zero, thus necessitating that some sample es
|
44,187
|
Relationship between logistic regression and Softmax Regression with 2 classes
|
Suppose you have a binary classification problem with $p$ features (including bias) and you do Multi-class regression with softmax activation. Then, the probability of an observation, $x,$ representing class $1$ is,
$$
\begin{split}
p_1(x) &= \frac{\exp(\beta_1^T x)}{\exp(\beta_1^T x) + \exp(\beta_2^T x)} \\
&= \frac{1}{1 + \exp[(\beta_2 - \beta_1)^T x]} = \sigma\left([\beta_1 - \beta_2]^T x\right).
\end{split}
$$
where both $\beta_1$ and $\beta_2$ are $p \times 1$ parameters, and I simply multiplied both numerator and denominator by $\exp(- \beta_1^T x).$ Note that the RHS of the above expression is simply the sigmoid function. Thus, the $p \times 1$ parameter $\beta$ you would get from doing ordinary binary logistic regression with sigmoid activation is analogous to $\beta_1 - \beta_2,$ where the latter parameters are from multiclass softmax regression.
|
Relationship between logistic regression and Softmax Regression with 2 classes
|
Suppose you have a binary classification problem with $p$ features (including bias) and you do Multi-class regression with softmax activation. Then, the probability of an observation, $x,$ representin
|
Relationship between logistic regression and Softmax Regression with 2 classes
Suppose you have a binary classification problem with $p$ features (including bias) and you do Multi-class regression with softmax activation. Then, the probability of an observation, $x,$ representing class $1$ is,
$$
\begin{split}
p_1(x) &= \frac{\exp(\beta_1^T x)}{\exp(\beta_1^T x) + \exp(\beta_2^T x)} \\
&= \frac{1}{1 + \exp[(\beta_2 - \beta_1)^T x]} = \sigma\left([\beta_1 - \beta_2]^T x\right).
\end{split}
$$
where both $\beta_1$ and $\beta_2$ are $p \times 1$ parameters, and I simply multiplied both numerator and denominator by $\exp(- \beta_1^T x).$ Note that the RHS of the above expression is simply the sigmoid function. Thus, the $p \times 1$ parameter $\beta$ you would get from doing ordinary binary logistic regression with sigmoid activation is analogous to $\beta_1 - \beta_2,$ where the latter parameters are from multiclass softmax regression.
|
Relationship between logistic regression and Softmax Regression with 2 classes
Suppose you have a binary classification problem with $p$ features (including bias) and you do Multi-class regression with softmax activation. Then, the probability of an observation, $x,$ representin
|
44,188
|
Relationship between logistic regression and Softmax Regression with 2 classes
|
In multinomial regression we model odds of observing $Y=k$ for each of the $K-1$ classes relatively to the $K$-th class. So with $K=2$ the model reduces to logistic regression.
Let me quote Wikipedia:
One fairly simple way to arrive at the multinomial logit model is to
imagine, for $K$ possible outcomes, running $K-1$ independent
binary logistic regression models, in which one outcome is chosen as a
"pivot" and then the other $K-1$ outcomes are separately regressed
against the pivot outcome. This would proceed as follows, if outcome
$K$ (the last outcome) is chosen as the pivot:
$$ \begin{align} \ln \frac{\Pr(Y_i=1)}{\Pr(Y_i=K)} &=
\boldsymbol\beta_1 \cdot \mathbf{X}_i \\ \ln
\frac{\Pr(Y_i=2)}{\Pr(Y_i=K)} &= \boldsymbol\beta_2 \cdot \mathbf{X}_i
\\ \cdots & \cdots \\ \ln \frac{\Pr(Y_i=K-1)}{\Pr(Y_i=K)} &=
\boldsymbol\beta_{K-1} \cdot \mathbf{X}_i \\ \end{align} $$
(...) Using the fact that all $K$ of the probabilities must sum to
one, we find:
$$ \Pr(Y_i=K) = 1 - \sum_{k=1}^{K-1}{\Pr(Y_i=K)}e^{\boldsymbol\beta_k
\cdot \mathbf{X}_i} \Rightarrow \Pr(Y_i=K) = \frac{1}{1 +
\sum_{k=1}^{K-1} e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}} $$
We can use this to find the other probabilities:
$$ \begin{align} \Pr(Y_i=1) &= \frac{e^{\boldsymbol\beta_1 \cdot
\mathbf{X}_i}}{1 + \sum_{k=1}^{K-1} e^{\boldsymbol\beta_k \cdot
\mathbf{X}_i}} \\ \Pr(Y_i=2) &= \frac{e^{\boldsymbol\beta_2 \cdot
\mathbf{X}_i}}{1 + \sum_{k=1}^{K-1} e^{\boldsymbol\beta_k \cdot
\mathbf{X}_i}} \\ \cdots & \cdots \\ \Pr(Y_i=K-1) &=
\frac{e^{\boldsymbol\beta_{K-1} \cdot \mathbf{X}_i}}{1 +
\sum_{k=1}^{K-1} e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}} \\
\end{align} $$
With logistic regression you have only
$$
\ln \frac{\Pr(Y_i=1)}{\Pr(Y_i=0)} =
\boldsymbol\beta \cdot \mathbf{X}_i
$$
and probabilities derived from it. Logistic regression and multinomial regression with $K=2$ are the same thing.
Example
If you take the data from this online tutorial and run logistic regression and multinomial regression on it, you'll see exactly the same results (compare the coefficients):
library(nnet)
mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
mydata$admit2 <- ifelse(mydata$admit == 1, 0, 1)
##
## glm(admit ~ gre + gpa, data = mydata, family = "binomial")
##
## Call: glm(formula = admit ~ gre + gpa, family = "binomial", data = mydata)
##
## Coefficients:
## (Intercept) gre gpa
## -4.949378 0.002691 0.754687
##
## Degrees of Freedom: 399 Total (i.e. Null); 397 Residual
## Null Deviance: 500
## Residual Deviance: 480.3 AIC: 486.3
multinom(cbind(admit2, admit) ~ gre + gpa, data = mydata, family = "binomial")
##
## # weights: 8 (3 variable)
## initial value 277.258872
## final value 240.171991
## converged
## Call:
## multinom(formula = cbind(admit2, admit) ~ gre + gpa, data = mydata,
## family = "binomial")
##
## Coefficients:
## (Intercept) gre gpa
## admit -4.949375 0.002690691 0.7546848
##
## Residual Deviance: 480.344
## AIC: 486.344
|
Relationship between logistic regression and Softmax Regression with 2 classes
|
In multinomial regression we model odds of observing $Y=k$ for each of the $K-1$ classes relatively to the $K$-th class. So with $K=2$ the model reduces to logistic regression.
Let me quote Wikipedia:
|
Relationship between logistic regression and Softmax Regression with 2 classes
In multinomial regression we model odds of observing $Y=k$ for each of the $K-1$ classes relatively to the $K$-th class. So with $K=2$ the model reduces to logistic regression.
Let me quote Wikipedia:
One fairly simple way to arrive at the multinomial logit model is to
imagine, for $K$ possible outcomes, running $K-1$ independent
binary logistic regression models, in which one outcome is chosen as a
"pivot" and then the other $K-1$ outcomes are separately regressed
against the pivot outcome. This would proceed as follows, if outcome
$K$ (the last outcome) is chosen as the pivot:
$$ \begin{align} \ln \frac{\Pr(Y_i=1)}{\Pr(Y_i=K)} &=
\boldsymbol\beta_1 \cdot \mathbf{X}_i \\ \ln
\frac{\Pr(Y_i=2)}{\Pr(Y_i=K)} &= \boldsymbol\beta_2 \cdot \mathbf{X}_i
\\ \cdots & \cdots \\ \ln \frac{\Pr(Y_i=K-1)}{\Pr(Y_i=K)} &=
\boldsymbol\beta_{K-1} \cdot \mathbf{X}_i \\ \end{align} $$
(...) Using the fact that all $K$ of the probabilities must sum to
one, we find:
$$ \Pr(Y_i=K) = 1 - \sum_{k=1}^{K-1}{\Pr(Y_i=K)}e^{\boldsymbol\beta_k
\cdot \mathbf{X}_i} \Rightarrow \Pr(Y_i=K) = \frac{1}{1 +
\sum_{k=1}^{K-1} e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}} $$
We can use this to find the other probabilities:
$$ \begin{align} \Pr(Y_i=1) &= \frac{e^{\boldsymbol\beta_1 \cdot
\mathbf{X}_i}}{1 + \sum_{k=1}^{K-1} e^{\boldsymbol\beta_k \cdot
\mathbf{X}_i}} \\ \Pr(Y_i=2) &= \frac{e^{\boldsymbol\beta_2 \cdot
\mathbf{X}_i}}{1 + \sum_{k=1}^{K-1} e^{\boldsymbol\beta_k \cdot
\mathbf{X}_i}} \\ \cdots & \cdots \\ \Pr(Y_i=K-1) &=
\frac{e^{\boldsymbol\beta_{K-1} \cdot \mathbf{X}_i}}{1 +
\sum_{k=1}^{K-1} e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}} \\
\end{align} $$
With logistic regression you have only
$$
\ln \frac{\Pr(Y_i=1)}{\Pr(Y_i=0)} =
\boldsymbol\beta \cdot \mathbf{X}_i
$$
and probabilities derived from it. Logistic regression and multinomial regression with $K=2$ are the same thing.
Example
If you take the data from this online tutorial and run logistic regression and multinomial regression on it, you'll see exactly the same results (compare the coefficients):
library(nnet)
mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
mydata$admit2 <- ifelse(mydata$admit == 1, 0, 1)
##
## glm(admit ~ gre + gpa, data = mydata, family = "binomial")
##
## Call: glm(formula = admit ~ gre + gpa, family = "binomial", data = mydata)
##
## Coefficients:
## (Intercept) gre gpa
## -4.949378 0.002691 0.754687
##
## Degrees of Freedom: 399 Total (i.e. Null); 397 Residual
## Null Deviance: 500
## Residual Deviance: 480.3 AIC: 486.3
multinom(cbind(admit2, admit) ~ gre + gpa, data = mydata, family = "binomial")
##
## # weights: 8 (3 variable)
## initial value 277.258872
## final value 240.171991
## converged
## Call:
## multinom(formula = cbind(admit2, admit) ~ gre + gpa, data = mydata,
## family = "binomial")
##
## Coefficients:
## (Intercept) gre gpa
## admit -4.949375 0.002690691 0.7546848
##
## Residual Deviance: 480.344
## AIC: 486.344
|
Relationship between logistic regression and Softmax Regression with 2 classes
In multinomial regression we model odds of observing $Y=k$ for each of the $K-1$ classes relatively to the $K$-th class. So with $K=2$ the model reduces to logistic regression.
Let me quote Wikipedia:
|
44,189
|
Regression trees - how are splits decided
|
Well, it depends on the implementation you are using. I assume we are talking about the original CART paper [1]
1) then there is always a single split resulting in two children.
2) The value used for splitting is determined by testing every value for every variable, that the one which minimizes the sum of squares error (SSE) best is chosen:
$SSE=\sum_{i\in S_1}({y_i- \bar{y}_1})^2+\sum_{i\in S_2}({y_i- \bar{y}_2})^2$
In the equation above $y_i$ is your predictors value and $\bar{y}_1$ and $\bar{y}_2$ the mean value of the left and right hand side of the possible split.
When feeding the following data matrix to the CART routine, every value for every variable/feature/column (here A,B and C) would be tested using SSE.
A B C y
0.05 0.31 0.51 0.97
0.32 0.41 0.88 0.89
0.76 0.61 0.48 0.11
0.81 0.94 0.85 0.19
The one minimizing SSE best, would be chosen for split. CART would test all possible splits using all values for variable A (0.05, 0.32, 0.76 and 0.81) and then using variable B, then C.
[1] Breiman, Leo, et al. Classification and regression trees. CRC press, 1984.
|
Regression trees - how are splits decided
|
Well, it depends on the implementation you are using. I assume we are talking about the original CART paper [1]
1) then there is always a single split resulting in two children.
2) The value used for
|
Regression trees - how are splits decided
Well, it depends on the implementation you are using. I assume we are talking about the original CART paper [1]
1) then there is always a single split resulting in two children.
2) The value used for splitting is determined by testing every value for every variable, that the one which minimizes the sum of squares error (SSE) best is chosen:
$SSE=\sum_{i\in S_1}({y_i- \bar{y}_1})^2+\sum_{i\in S_2}({y_i- \bar{y}_2})^2$
In the equation above $y_i$ is your predictors value and $\bar{y}_1$ and $\bar{y}_2$ the mean value of the left and right hand side of the possible split.
When feeding the following data matrix to the CART routine, every value for every variable/feature/column (here A,B and C) would be tested using SSE.
A B C y
0.05 0.31 0.51 0.97
0.32 0.41 0.88 0.89
0.76 0.61 0.48 0.11
0.81 0.94 0.85 0.19
The one minimizing SSE best, would be chosen for split. CART would test all possible splits using all values for variable A (0.05, 0.32, 0.76 and 0.81) and then using variable B, then C.
[1] Breiman, Leo, et al. Classification and regression trees. CRC press, 1984.
|
Regression trees - how are splits decided
Well, it depends on the implementation you are using. I assume we are talking about the original CART paper [1]
1) then there is always a single split resulting in two children.
2) The value used for
|
44,190
|
Is this really perfect separation in logistic regression, or is something else going on?
|
Looking at this
Coefficients:
(Intercept) SEX
-3.157e+01 -2.249e-13
I see that your model is returning a numeric zero for the coefficient of SEX ($-2.2 \times 10^{-13}$ may as well be $0$), and is driving the intercept to $-31.57$. Plugging that value into the logistic function in my R interpreter I get
> 1/(1 + exp(-31.57))
[1] 1
So you don't really have perfect separation except in a degenerate sense; your model is saying there is a probability of one of a suicide for every record.
I can't say why this is so without seeing your data, but I would hypothesize it is an encoding error in how you are passing the response to the model. Make sure that your response column is coded as an indicator variable, $0$ for no suicide, $1$ for a suicide.
In order to construct my model, I intended to individually regress SI on each covariate, and use the p-value from the likelihood ratio test for each model to inform which covariates should be considered for the backward model selection.
I can't help but comment that this is a a poor procedure. Regressing a response on individual predictors tells you next to nothing about the structure of a multivariate model. Backwards selection also has it's own host of problems, as you will find if you search this site for the term.
If you want to do variable selection, please consider a more principled method like glmnet.
|
Is this really perfect separation in logistic regression, or is something else going on?
|
Looking at this
Coefficients:
(Intercept) SEX
-3.157e+01 -2.249e-13
I see that your model is returning a numeric zero for the coefficient of SEX ($-2.2 \times 10^{-13}$ may as well be $
|
Is this really perfect separation in logistic regression, or is something else going on?
Looking at this
Coefficients:
(Intercept) SEX
-3.157e+01 -2.249e-13
I see that your model is returning a numeric zero for the coefficient of SEX ($-2.2 \times 10^{-13}$ may as well be $0$), and is driving the intercept to $-31.57$. Plugging that value into the logistic function in my R interpreter I get
> 1/(1 + exp(-31.57))
[1] 1
So you don't really have perfect separation except in a degenerate sense; your model is saying there is a probability of one of a suicide for every record.
I can't say why this is so without seeing your data, but I would hypothesize it is an encoding error in how you are passing the response to the model. Make sure that your response column is coded as an indicator variable, $0$ for no suicide, $1$ for a suicide.
In order to construct my model, I intended to individually regress SI on each covariate, and use the p-value from the likelihood ratio test for each model to inform which covariates should be considered for the backward model selection.
I can't help but comment that this is a a poor procedure. Regressing a response on individual predictors tells you next to nothing about the structure of a multivariate model. Backwards selection also has it's own host of problems, as you will find if you search this site for the term.
If you want to do variable selection, please consider a more principled method like glmnet.
|
Is this really perfect separation in logistic regression, or is something else going on?
Looking at this
Coefficients:
(Intercept) SEX
-3.157e+01 -2.249e-13
I see that your model is returning a numeric zero for the coefficient of SEX ($-2.2 \times 10^{-13}$ may as well be $
|
44,191
|
Is this really perfect separation in logistic regression, or is something else going on?
|
I think with a sample of 16000 is unlikely you have perfect prediction, try doing cross tabulations of each variable before doing the individual logit models and see if there is perfect prediction. This way you can also check if the response variable is coded as an indicator.
|
Is this really perfect separation in logistic regression, or is something else going on?
|
I think with a sample of 16000 is unlikely you have perfect prediction, try doing cross tabulations of each variable before doing the individual logit models and see if there is perfect prediction. Th
|
Is this really perfect separation in logistic regression, or is something else going on?
I think with a sample of 16000 is unlikely you have perfect prediction, try doing cross tabulations of each variable before doing the individual logit models and see if there is perfect prediction. This way you can also check if the response variable is coded as an indicator.
|
Is this really perfect separation in logistic regression, or is something else going on?
I think with a sample of 16000 is unlikely you have perfect prediction, try doing cross tabulations of each variable before doing the individual logit models and see if there is perfect prediction. Th
|
44,192
|
Is this really perfect separation in logistic regression, or is something else going on?
|
First, it's upsetting to see that your statistics professor is training you to use stepwise selection for model building. See this page for an introduction to the problems with stepwise selection and for choices of better alternatives; follow the stepwise-regression and model-selection tags on this site.
With 3 levels of the categorical "SEX" variable, according to your table, and only 1 (essentially zero) coefficient in the glm output, something is fishy. It might somehow originate from your inclusion of an "Unknown" category with very few cases (and which would seem to be the reference category of that factor from your table), or with glm somehow interpreting "SEX" as a numeric rather than a categorical variable. Code missing data as NA throughout rather than label them as "Unknown" and then remove the unused "Unknown" levels with droplevels()in R.
|
Is this really perfect separation in logistic regression, or is something else going on?
|
First, it's upsetting to see that your statistics professor is training you to use stepwise selection for model building. See this page for an introduction to the problems with stepwise selection and
|
Is this really perfect separation in logistic regression, or is something else going on?
First, it's upsetting to see that your statistics professor is training you to use stepwise selection for model building. See this page for an introduction to the problems with stepwise selection and for choices of better alternatives; follow the stepwise-regression and model-selection tags on this site.
With 3 levels of the categorical "SEX" variable, according to your table, and only 1 (essentially zero) coefficient in the glm output, something is fishy. It might somehow originate from your inclusion of an "Unknown" category with very few cases (and which would seem to be the reference category of that factor from your table), or with glm somehow interpreting "SEX" as a numeric rather than a categorical variable. Code missing data as NA throughout rather than label them as "Unknown" and then remove the unused "Unknown" levels with droplevels()in R.
|
Is this really perfect separation in logistic regression, or is something else going on?
First, it's upsetting to see that your statistics professor is training you to use stepwise selection for model building. See this page for an introduction to the problems with stepwise selection and
|
44,193
|
What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal?
|
As I mentioned in comments, showing what minimizes $\sum (x_i-\alpha)^2$ can be done in several ways, such as by simple calculus, or by writing $\sum (x_i-\alpha)^2=\sum (x_i-\bar{x}+\bar{x}-\alpha)^2$. Let's look at the second one:
$\sum (x_i-\alpha)^2=\sum (x_i-\bar{x}+\bar{x}-\alpha)^2$
$\hspace{2.55cm}=\sum (x_i-\bar{x})^2+\sum(\bar{x}-\alpha)^2+2\sum(x_i-\bar{x})(\bar{x}-\alpha)$
$\hspace{2.55cm}=\sum (x_i-\bar{x})^2+\sum(\bar{x}-\alpha)^2+2(\bar{x}-\alpha)\sum(x_i-\bar{x})$
$\hspace{2.55cm}=\sum (x_i-\bar{x})^2+\sum(\bar{x}-\alpha)^2+2(\bar{x}-\alpha)\cdot 0$
$\hspace{2.55cm}=\sum (x_i-\bar{x})^2+\sum(\bar{x}-\alpha)^2$
Now the first term is unaltered by the choice of $\alpha$ and the last term can be made zero by setting $\alpha=\bar{x}$; any other choice leads to a larger value of the second term. Hence that expression is minimized by setting $\alpha=\bar{x}$.
|
What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal?
|
As I mentioned in comments, showing what minimizes $\sum (x_i-\alpha)^2$ can be done in several ways, such as by simple calculus, or by writing $\sum (x_i-\alpha)^2=\sum (x_i-\bar{x}+\bar{x}-\alpha)^2
|
What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal?
As I mentioned in comments, showing what minimizes $\sum (x_i-\alpha)^2$ can be done in several ways, such as by simple calculus, or by writing $\sum (x_i-\alpha)^2=\sum (x_i-\bar{x}+\bar{x}-\alpha)^2$. Let's look at the second one:
$\sum (x_i-\alpha)^2=\sum (x_i-\bar{x}+\bar{x}-\alpha)^2$
$\hspace{2.55cm}=\sum (x_i-\bar{x})^2+\sum(\bar{x}-\alpha)^2+2\sum(x_i-\bar{x})(\bar{x}-\alpha)$
$\hspace{2.55cm}=\sum (x_i-\bar{x})^2+\sum(\bar{x}-\alpha)^2+2(\bar{x}-\alpha)\sum(x_i-\bar{x})$
$\hspace{2.55cm}=\sum (x_i-\bar{x})^2+\sum(\bar{x}-\alpha)^2+2(\bar{x}-\alpha)\cdot 0$
$\hspace{2.55cm}=\sum (x_i-\bar{x})^2+\sum(\bar{x}-\alpha)^2$
Now the first term is unaltered by the choice of $\alpha$ and the last term can be made zero by setting $\alpha=\bar{x}$; any other choice leads to a larger value of the second term. Hence that expression is minimized by setting $\alpha=\bar{x}$.
|
What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal?
As I mentioned in comments, showing what minimizes $\sum (x_i-\alpha)^2$ can be done in several ways, such as by simple calculus, or by writing $\sum (x_i-\alpha)^2=\sum (x_i-\bar{x}+\bar{x}-\alpha)^2
|
44,194
|
What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal?
|
Putting the first derivative (with respect to $\alpha$) equal to zero you find $2\sum_i (x_i -\alpha) (-1) = 0$ so $\sum_i x_i = n \alpha$ or $\alpha = \frac{1}{n} \sum_i x_i$
|
What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal?
|
Putting the first derivative (with respect to $\alpha$) equal to zero you find $2\sum_i (x_i -\alpha) (-1) = 0$ so $\sum_i x_i = n \alpha$ or $\alpha = \frac{1}{n} \sum_i x_i$
|
What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal?
Putting the first derivative (with respect to $\alpha$) equal to zero you find $2\sum_i (x_i -\alpha) (-1) = 0$ so $\sum_i x_i = n \alpha$ or $\alpha = \frac{1}{n} \sum_i x_i$
|
What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal?
Putting the first derivative (with respect to $\alpha$) equal to zero you find $2\sum_i (x_i -\alpha) (-1) = 0$ so $\sum_i x_i = n \alpha$ or $\alpha = \frac{1}{n} \sum_i x_i$
|
44,195
|
Is the accessible population a random sample?
|
This is an important question, made explicit by Deming and Stephan (1941), who used first the word "superpopulation" to describe the approach with that name: assume that the current population is itself a sample from a larger, hypothetical, population. The concept is implicit also in Cochran (1939). See Stanek, 2000b, where I first found the reference to Cochran's paper.
If the students each year are drawn from the this superpopulation and teaching remains the same, then consider the available population to be a simple random sample and use the appropriate survey-design-based analyses (Deming, 1966, pp 247-261). There are also model-based superpopulation solutions, e.g. that observations are drawn from normal distributions, but these are stronger assumptions. I would also avoid inference based on likelihood ratios.
If, however, there are random or systematic (e.g. temporal trend) differences between students each year, then you would need several years of data to estimate these effects and incorporate them into into your analyses.
If the teaching content (or instructor) also changes from year-to-year, then you have an additional source of difference that will be difficult to predict.
The bottom line: you can analyze the class as if it represents future classes, but you must qualify your conclusions by stating the problems with this assumption.
I have answered related questions elsewhere on SO. See, e.g.
Applying inferential statistics for census data
Justifying the use of finite population correction
Adjusting any power analysis with FPC?
For some other references on the superpopulation approach, see: Korn and Graubard, 1999,p.227); Gelman, 2009; and a couple of unpublished notes by Ed Stanek (2000 a,b). The first paper contains an incomplete set of references.
References
Cochran, W. G. (1939). "The use of analysis of variance in enumeration by
sampling.,"Journal of the American Statistical Association,34:492-51
Cochran, W. G. (1977). Sampling techniques (3rd Ed.). New York: Wiley.
Deming, W Edwards, and Frederick F Stephan. (1941). On the interpretation of censuses as samples. Journal of the American Statistical Association 36, no. 213: 45-49
Deming, W. E. (1966). Some theory of sampling. New York: Dover Publications.
Andrew Gelman, 2009. How does statistical analysis differ when analyzing the entire population rather than a sample? http://andrewgelman.com/2009/07/03/how_does_statis/
Korn, E. L., & Graubard, B. I. (1999). Analysis of health surveys (Wiley series in probability and statistics). New York: Wiley.
Ed Stanek (2000a) Ideas on Superpopulation Models and Inference http://www.umass.edu/cluster/ed/unpublication/yr2000/c00ed62.PDF
Ed Stanek (2000b) Superpopulations and Superpopulation Models
http://www.umass.edu/cluster/ed/unpublication/yr2000/c00ed64v1.PDF
|
Is the accessible population a random sample?
|
This is an important question, made explicit by Deming and Stephan (1941), who used first the word "superpopulation" to describe the approach with that name: assume that the current population is its
|
Is the accessible population a random sample?
This is an important question, made explicit by Deming and Stephan (1941), who used first the word "superpopulation" to describe the approach with that name: assume that the current population is itself a sample from a larger, hypothetical, population. The concept is implicit also in Cochran (1939). See Stanek, 2000b, where I first found the reference to Cochran's paper.
If the students each year are drawn from the this superpopulation and teaching remains the same, then consider the available population to be a simple random sample and use the appropriate survey-design-based analyses (Deming, 1966, pp 247-261). There are also model-based superpopulation solutions, e.g. that observations are drawn from normal distributions, but these are stronger assumptions. I would also avoid inference based on likelihood ratios.
If, however, there are random or systematic (e.g. temporal trend) differences between students each year, then you would need several years of data to estimate these effects and incorporate them into into your analyses.
If the teaching content (or instructor) also changes from year-to-year, then you have an additional source of difference that will be difficult to predict.
The bottom line: you can analyze the class as if it represents future classes, but you must qualify your conclusions by stating the problems with this assumption.
I have answered related questions elsewhere on SO. See, e.g.
Applying inferential statistics for census data
Justifying the use of finite population correction
Adjusting any power analysis with FPC?
For some other references on the superpopulation approach, see: Korn and Graubard, 1999,p.227); Gelman, 2009; and a couple of unpublished notes by Ed Stanek (2000 a,b). The first paper contains an incomplete set of references.
References
Cochran, W. G. (1939). "The use of analysis of variance in enumeration by
sampling.,"Journal of the American Statistical Association,34:492-51
Cochran, W. G. (1977). Sampling techniques (3rd Ed.). New York: Wiley.
Deming, W Edwards, and Frederick F Stephan. (1941). On the interpretation of censuses as samples. Journal of the American Statistical Association 36, no. 213: 45-49
Deming, W. E. (1966). Some theory of sampling. New York: Dover Publications.
Andrew Gelman, 2009. How does statistical analysis differ when analyzing the entire population rather than a sample? http://andrewgelman.com/2009/07/03/how_does_statis/
Korn, E. L., & Graubard, B. I. (1999). Analysis of health surveys (Wiley series in probability and statistics). New York: Wiley.
Ed Stanek (2000a) Ideas on Superpopulation Models and Inference http://www.umass.edu/cluster/ed/unpublication/yr2000/c00ed62.PDF
Ed Stanek (2000b) Superpopulations and Superpopulation Models
http://www.umass.edu/cluster/ed/unpublication/yr2000/c00ed64v1.PDF
|
Is the accessible population a random sample?
This is an important question, made explicit by Deming and Stephan (1941), who used first the word "superpopulation" to describe the approach with that name: assume that the current population is its
|
44,196
|
Is the accessible population a random sample?
|
At its face value, this is a convenience sample. Real sampling involves randomization, and I don't think any university will allow you to randomly put students to sections. There's undoubtedly an issue of self-selection that produces biased samples with skewed prevalences of students with different backgrounds and characteristics. Only the more responsible students will take MWF 8:00am classes, and those who need to work during the day may prefer late night classes, etc.
|
Is the accessible population a random sample?
|
At its face value, this is a convenience sample. Real sampling involves randomization, and I don't think any university will allow you to randomly put students to sections. There's undoubtedly an issu
|
Is the accessible population a random sample?
At its face value, this is a convenience sample. Real sampling involves randomization, and I don't think any university will allow you to randomly put students to sections. There's undoubtedly an issue of self-selection that produces biased samples with skewed prevalences of students with different backgrounds and characteristics. Only the more responsible students will take MWF 8:00am classes, and those who need to work during the day may prefer late night classes, etc.
|
Is the accessible population a random sample?
At its face value, this is a convenience sample. Real sampling involves randomization, and I don't think any university will allow you to randomly put students to sections. There's undoubtedly an issu
|
44,197
|
What is the difference between various Kruskal-Wallis post-hoc tests?
|
Understanding how these test implementations differ requires understanding the actual test statistics themselves.
For example, dunn.test provides Dunn's (1964) z test approximation to a rank sum test employing both the same ranks used in the Kruskal-Wallis test, and the pooled variance estimate implied by the null hypothesis of the Kruskal-Wallis (akin to using the pooled variance to calculate t test statistics following an ANOVA).
By contrast, the Kruskal-Nemenyi test as implemented in posthoc.kruskal.nemenyi.test is based on either the Studentized range distribution, or the $\chi^{2}$ distribution depending on user choice.
The kruskalmc function in the pgirmess package implements Dunn's post hoc rank sum comparison using z test statistics as directed by Siegel and Castellan (1988), but these authors do not include Dunn's (1964) correction for ties, so kruskalmc will be less accurate than dunn.test when ties exist in the data.
It is difficult to discern from the documentation of kruskal whether the author is using the Conover-Iman t approximation to the distribution of rank sum differences (similar to Dunn's test, but requires that the Kruskal-Wallis be rejected, and is more powerful). A brief glance at the code does not immediately scream out Conover-Iman to me, however, it is quite possible that is an implementation of the test. More certainly implemented in R is the conover.test package.
The tl;dr: these all appear to be implementations of different test statistics or different forms of the same test statistic, so there is no reason to expect them to agree.
References
W. Jay Conover (1999) Practical Nonparametrics Statistics.
Conover, W. J. and Iman, R. L. (1979). On multiple-comparisons procedures. Technical Report LA-7677-MS, Los Alamos Scientific Laboratory.
Dunn, O. J. (1964). Multiple comparisons using rank sums. Technometrics, 6(3):241–252.
Siegel and Castellan (1988) Non parametric statistics for the behavioural sciences. MacGraw Hill Int., New York. pp 213-214
|
What is the difference between various Kruskal-Wallis post-hoc tests?
|
Understanding how these test implementations differ requires understanding the actual test statistics themselves.
For example, dunn.test provides Dunn's (1964) z test approximation to a rank sum test
|
What is the difference between various Kruskal-Wallis post-hoc tests?
Understanding how these test implementations differ requires understanding the actual test statistics themselves.
For example, dunn.test provides Dunn's (1964) z test approximation to a rank sum test employing both the same ranks used in the Kruskal-Wallis test, and the pooled variance estimate implied by the null hypothesis of the Kruskal-Wallis (akin to using the pooled variance to calculate t test statistics following an ANOVA).
By contrast, the Kruskal-Nemenyi test as implemented in posthoc.kruskal.nemenyi.test is based on either the Studentized range distribution, or the $\chi^{2}$ distribution depending on user choice.
The kruskalmc function in the pgirmess package implements Dunn's post hoc rank sum comparison using z test statistics as directed by Siegel and Castellan (1988), but these authors do not include Dunn's (1964) correction for ties, so kruskalmc will be less accurate than dunn.test when ties exist in the data.
It is difficult to discern from the documentation of kruskal whether the author is using the Conover-Iman t approximation to the distribution of rank sum differences (similar to Dunn's test, but requires that the Kruskal-Wallis be rejected, and is more powerful). A brief glance at the code does not immediately scream out Conover-Iman to me, however, it is quite possible that is an implementation of the test. More certainly implemented in R is the conover.test package.
The tl;dr: these all appear to be implementations of different test statistics or different forms of the same test statistic, so there is no reason to expect them to agree.
References
W. Jay Conover (1999) Practical Nonparametrics Statistics.
Conover, W. J. and Iman, R. L. (1979). On multiple-comparisons procedures. Technical Report LA-7677-MS, Los Alamos Scientific Laboratory.
Dunn, O. J. (1964). Multiple comparisons using rank sums. Technometrics, 6(3):241–252.
Siegel and Castellan (1988) Non parametric statistics for the behavioural sciences. MacGraw Hill Int., New York. pp 213-214
|
What is the difference between various Kruskal-Wallis post-hoc tests?
Understanding how these test implementations differ requires understanding the actual test statistics themselves.
For example, dunn.test provides Dunn's (1964) z test approximation to a rank sum test
|
44,198
|
What is the difference between various Kruskal-Wallis post-hoc tests?
|
I know that this thread is older, but I came across it because I was looking for answers about the post-hoc test applied to the Kruskal-Wallis test found in the agricolae package. I really needed to know for documentation purposes so I personally emailed the maintainer to ask what procedure is used for the post-hoc test. He told me that it does use a procedure from Conover. Here is his specific reply in reference to the Kruskal-Wallis post-hoc test:
The Kruskal test is nonparametric, but it is feasible to apply a function as the least significant difference on mean ranks, which can make an adjustment on probability, the procedure is a criterion with the critical range by Conover.
I hope this helps to answer the question regarding the Kruskal-Wallis test in agricolae.
|
What is the difference between various Kruskal-Wallis post-hoc tests?
|
I know that this thread is older, but I came across it because I was looking for answers about the post-hoc test applied to the Kruskal-Wallis test found in the agricolae package. I really needed to k
|
What is the difference between various Kruskal-Wallis post-hoc tests?
I know that this thread is older, but I came across it because I was looking for answers about the post-hoc test applied to the Kruskal-Wallis test found in the agricolae package. I really needed to know for documentation purposes so I personally emailed the maintainer to ask what procedure is used for the post-hoc test. He told me that it does use a procedure from Conover. Here is his specific reply in reference to the Kruskal-Wallis post-hoc test:
The Kruskal test is nonparametric, but it is feasible to apply a function as the least significant difference on mean ranks, which can make an adjustment on probability, the procedure is a criterion with the critical range by Conover.
I hope this helps to answer the question regarding the Kruskal-Wallis test in agricolae.
|
What is the difference between various Kruskal-Wallis post-hoc tests?
I know that this thread is older, but I came across it because I was looking for answers about the post-hoc test applied to the Kruskal-Wallis test found in the agricolae package. I really needed to k
|
44,199
|
What is the famous data set that looks totally different but has similar summary stats?
|
You must be thinking of Anscombe's quartet.
|
What is the famous data set that looks totally different but has similar summary stats?
|
You must be thinking of Anscombe's quartet.
|
What is the famous data set that looks totally different but has similar summary stats?
You must be thinking of Anscombe's quartet.
|
What is the famous data set that looks totally different but has similar summary stats?
You must be thinking of Anscombe's quartet.
|
44,200
|
What is the famous data set that looks totally different but has similar summary stats?
|
Anscombe quartet is the name (as said before), and its standard plots are below.
It was constructed in from Graphs in Statistical Analysis, The American Statistician, 1973. Since then, there have been attemps to reproduce or generalize it on a broader extend, for instance:
Generating Data with Identical Statistics but Dissimilar Graphics: A Follow up to the Anscombe Dataset, The American Statistician, 2007,
.Cloning data: generating datasets with exactly the same multiple linear regression fit, Australian and New Zealand Journal of Statistics, 2009.
|
What is the famous data set that looks totally different but has similar summary stats?
|
Anscombe quartet is the name (as said before), and its standard plots are below.
It was constructed in from Graphs in Statistical Analysis, The American Statistician, 1973. Since then, there have bee
|
What is the famous data set that looks totally different but has similar summary stats?
Anscombe quartet is the name (as said before), and its standard plots are below.
It was constructed in from Graphs in Statistical Analysis, The American Statistician, 1973. Since then, there have been attemps to reproduce or generalize it on a broader extend, for instance:
Generating Data with Identical Statistics but Dissimilar Graphics: A Follow up to the Anscombe Dataset, The American Statistician, 2007,
.Cloning data: generating datasets with exactly the same multiple linear regression fit, Australian and New Zealand Journal of Statistics, 2009.
|
What is the famous data set that looks totally different but has similar summary stats?
Anscombe quartet is the name (as said before), and its standard plots are below.
It was constructed in from Graphs in Statistical Analysis, The American Statistician, 1973. Since then, there have bee
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.