| idx (int64, 1–56k) | question (string, 15–155 chars) | answer (string, 2–29.2k chars, nullable) | question_cut (string, 15–100 chars) | answer_cut (string, 2–200 chars, nullable) | conversation (string, 47–29.3k chars) | conversation_cut (string, 47–301 chars) |
|---|---|---|---|---|---|---|
44,101 | What methods to use for statistical prediction/forecast of trading data? | In general, in the markets, a mechanical trading system that has outperformed in the past tends to underperform in the future. If you're really lucky, then previous outperformance is just statistical noise, and your future performance is in line with the market.
In short, to your question "How can I calculate the c... | What methods to use for statistical prediction/forecast of trading data? | In general, in the markets, a mechanical trading system that has outperformed in the past tends to underperform in the future. If you're really lucky, then previous outperformance is just statisti | What methods to use for statistical prediction/forecast of trading data?
In general, in the markets, a mechanical trading system that has outperformed in the past tends to underperform in the future. If you're really lucky, then previous outperformance is just statistical noise, and your future performance is in li... | What methods to use for statistical prediction/forecast of trading data?
In general, in the markets, a mechanical trading system that has outperformed in the past tends to underperform in the future. If you're really lucky, then previous outperformance is just statisti |
44,102 | Is it possible to do data analysis in Open Office Calc? | Yes, you can do statistics in Open Office Calc:
Here is a list of statistical functions in LibreOffice Calc
Possibly out of date suggestions
There is an add-on called R and Calc (page last modified in 2008; ymmv) that allows the user to call R functions from within Open Office.
Calc's data analysis tool is under deve... | Is it possible to do data analysis in Open Office Calc? | Yes, you can do statistics in Open Office Calc:
Here is a list of statistical functions in LibreOffice Calc
Possibly out of date suggestions
There is an add-on called R and Calc (page last modified | Is it possible to do data analysis in Open Office Calc?
Yes, you can do statistics in Open Office Calc:
Here is a list of statistical functions in LibreOffice Calc
Possibly out of date suggestions
There is an add-on called R and Calc (page last modified in 2008; ymmv) that allows the user to call R functions from wit... | Is it possible to do data analysis in Open Office Calc?
Yes, you can do statistics in Open Office Calc:
Here is a list of statistical functions in LibreOffice Calc
Possibly out of date suggestions
There is an add-on called R and Calc (page last modified |
44,103 | Is it possible to do data analysis in Open Office Calc? | Gnumeric
http://projects.gnome.org/gnumeric/
will do various statistical analyses. After installation they are found under Statistics in the top (File, Edit, etc.) menu. | Is it possible to do data analysis in Open Office Calc? | Gnumeric
http://projects.gnome.org/gnumeric/
will do various statistical analyses. After installation they are found under Statistics in the top (File, Edit, etc.) menu. | Is it possible to do data analysis in Open Office Calc?
Gnumeric
http://projects.gnome.org/gnumeric/
will do various statistical analyses. After installation they are found under Statistics in the top (File, Edit, etc.) menu. | Is it possible to do data analysis in Open Office Calc?
Gnumeric
http://projects.gnome.org/gnumeric/
will do various statistical analyses. After installation they are found under Statistics in the top (File, Edit, etc.) menu. |
44,104 | Is it possible to do data analysis in Open Office Calc? | First post here!
I've used this:
http://sourceforge.net/projects/ooomacros/files/OOo%20Statistics/
to do stats in OpenOffice (and recommended it to others as well).
I usually use R but sometimes a quick look is all you need.
best
i | Is it possible to do data analysis in Open Office Calc? | First post here!
I've used this:
http://sourceforge.net/projects/ooomacros/files/OOo%20Statistics/
to do stats in OpenOffice (and recommended it to others as well).
I usually use R but sometimes a qui | Is it possible to do data analysis in Open Office Calc?
First post here!
I've used this:
http://sourceforge.net/projects/ooomacros/files/OOo%20Statistics/
to do stats in OpenOffice (and recommended it to others as well).
I usually use R but sometimes a quick look is all you need.
best
i | Is it possible to do data analysis in Open Office Calc?
First post here!
I've used this:
http://sourceforge.net/projects/ooomacros/files/OOo%20Statistics/
to do stats in OpenOffice (and recommended it to others as well).
I usually use R but sometimes a qui |
44,105 | Is it possible to do data analysis in Open Office Calc? | Sofastats looks really well done, and it can import from OpenOffice files.
http://www.sofastatistics.com/ | Is it possible to do data analysis in Open Office Calc? | Sofastats looks really well done, and it can import from OpenOffice files.
http://www.sofastatistics.com/ | Is it possible to do data analysis in Open Office Calc?
Sofastats looks really well done, and it can import from OpenOffice files.
http://www.sofastatistics.com/ | Is it possible to do data analysis in Open Office Calc?
Sofastats looks really well done, and it can import from OpenOffice files.
http://www.sofastatistics.com/ |
44,106 | Factor dependent correlation | I agree with JMS's advice that the answer is totally context-dependent.
But what you are looking at may also be considered a moderation effect.
In statistics, moderation occurs when
the relationship between two variables
depends on a third variable.
(quoted from Wikipedia)
A moderation is statistically signific... | Factor dependent correlation | I agree with JMS's advice that the answer is totally context-dependent.
But what you are looking at may also be considered a moderation effect.
In statistics, moderation occurs when
the relationshi | Factor dependent correlation
I agree with JMS's advice that the answer is totally context-dependent.
But what you are looking at may also be considered a moderation effect.
In statistics, moderation occurs when
the relationship between two variables
depends on a third variable.
(quoted from Wikipedia)
A moderat... | Factor dependent correlation
I agree with JMS's advice that the answer is totally context-dependent.
But what you are looking at may also be considered a moderation effect.
In statistics, moderation occurs when
the relationshi |
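A moderation (interaction) effect of the kind described above is usually modeled by adding a product term to a regression. Here is a minimal sketch in Python (the data, coefficients, and noise level are all made up for illustration; the thread itself is about R, where the analogous model would be lm(y ~ x * z)):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)  # predictor
z = rng.normal(size=n)  # potential moderator
# The slope of y on x depends on z: true interaction coefficient is 4
y = 1 + 2 * x + 3 * z + 4 * x * z + rng.normal(scale=0.1, size=n)

# Design matrix with intercept, main effects, and the x*z product term
X = np.column_stack([np.ones(n), x, z, x * z])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 1))  # roughly [1. 2. 3. 4.]
```

If the fitted interaction coefficient is clearly nonzero, the x–y relationship depends on z, which is exactly the moderation pattern quoted above.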
44,107 | Factor dependent correlation | Are you familiar with Simpson's paradox? This would seem to be what you're observing here.
Edit: I didn't answer your question :) What exactly you should do is to some degree context dependent (Are the groups meaningful? Does this represent a problem in the study design? etc). At the very least you should report both r... | Factor dependent correlation | Are you familiar with Simpson's paradox? This would seem to be what you're observing here.
Edit: I didn't answer your question :) What exactly you should do is to some degree context dependent (Are th | Factor dependent correlation
Are you familiar with Simpson's paradox? This would seem to be what you're observing here.
Edit: I didn't answer your question :) What exactly you should do is to some degree context dependent (Are the groups meaningful? Does this represent a problem in the study design? etc). At the very l... | Factor dependent correlation
Are you familiar with Simpson's paradox? This would seem to be what you're observing here.
Edit: I didn't answer your question :) What exactly you should do is to some degree context dependent (Are th |
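Simpson's paradox mentioned in this answer is easy to reproduce with toy numbers; a small Python sketch (the data are invented purely to force the reversal):

```python
import numpy as np

# Within each group, y decreases as x increases (correlation -1)...
g1_x, g1_y = np.array([1.0, 2.0, 3.0]), np.array([5.0, 4.0, 3.0])
g2_x, g2_y = np.array([6.0, 7.0, 8.0]), np.array([10.0, 9.0, 8.0])

r1 = np.corrcoef(g1_x, g1_y)[0, 1]
r2 = np.corrcoef(g2_x, g2_y)[0, 1]

# ...but pooling the groups flips the sign of the correlation
r_pooled = np.corrcoef(np.concatenate([g1_x, g2_x]),
                       np.concatenate([g1_y, g2_y]))[0, 1]
print(r1, r2, r_pooled)  # -1.0, -1.0, and a clearly positive pooled value
```

The group-level offset dominates the pooled correlation, which is why reporting both within-group and overall correlations, as suggested above, matters.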
44,108 | Factor dependent correlation | The previous comments are all good, but with group sample sizes of 5, 7, and 11, I wouldn't trust any of their correlations as far as I could throw them. You'll need to give the overall r a wide confidence interval as well. By the way, nice job on the graph. | Factor dependent correlation | The previous comments are all good, but with group sample sizes of 5, 7, and 11, I wouldn't trust any of their correlations as far as I could throw them. You'll need to give the overall r a wide conf | Factor dependent correlation
The previous comments are all good, but with group sample sizes of 5, 7, and 11, I wouldn't trust any of their correlations as far as I could throw them. You'll need to give the overall r a wide confidence interval as well. By the way, nice job on the graph. | Factor dependent correlation
The previous comments are all good, but with group sample sizes of 5, 7, and 11, I wouldn't trust any of their correlations as far as I could throw them. You'll need to give the overall r a wide conf |
44,109 | zscore function in R [closed] | The zscore function you are looking for can be found in the R.basic package made by Henrik Bengtsson, which cannot be found on CRAN. To install it, use:
install.packages(c("R.basic"), contriburl="http://www.braju.com/R/repos/")
See this similar topic for more details. | zscore function in R [closed] | The zscore function you are looking for can be found in the R.basic package made by Henrik Bengtsson, which cannot be found on CRAN. To install it, use:
install.packages(c("R.basic"), contriburl="http: | zscore function in R [closed]
The zscore function you are looking for can be found in the R.basic package made by Henrik Bengtsson, which cannot be found on CRAN. To install it, use:
install.packages(c("R.basic"), contriburl="http://www.braju.com/R/repos/")
See this similar topic for more details. | zscore function in R [closed]
The zscore function you are looking for can be found in the R.basic package made by Henrik Bengtsson, which cannot be found on CRAN. To install it, use:
install.packages(c("R.basic"), contriburl="http: |
44,110 | zscore function in R [closed] | Also, the base R function scale() can be used to produce z-scores. See help(scale) | zscore function in R [closed] | Also, the base R function scale() can be used to produce z-scores. See help(scale) | zscore function in R [closed]
Also, the base R function scale() can be used to produce z-scores. See help(scale) | zscore function in R [closed]
Also, the base R function scale() can be used to produce z-scores. See help(scale) |
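For readers working outside R: scale() centers by the mean and divides by the sample standard deviation. A one-function sketch in Python (plain NumPy, mirroring scale()'s default behavior; not part of any package):

```python
import numpy as np

def zscore(x):
    """Standardize like R's default scale(): mean 0, sample sd (ddof=1) of 1."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

print(zscore([1, 2, 3]))  # [-1.  0.  1.]
```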
44,111 | zscore function in R [closed] | You can use the following function to calculate the z value:
zVal <- round(qnorm(1 - (1 - prob)/2), 2)
Example, for 90%:
> zVal <- round(qnorm(1 - (1 - 0.90)/2), 2)
> zVal
[1] 1.64 | zscore function in R [closed] | You can use the following function to calculate the z value:
zVal <- round(qnorm(1 - (1 - prob)/2), 2)
Example, for 90%:
> zVal <- round(qnorm(1 - (1 - 0.90)/2), 2)
> zVal
[1] 1.64 | zscore function in R [closed]
You can use the following function to calculate the z value:
zVal <- round(qnorm(1 - (1 - prob)/2), 2)
Example, for 90%:
> zVal <- round(qnorm(1 - (1 - 0.90)/2), 2)
> zVal
[1] 1.64 | zscore function in R [closed]
You can use the following function to calculate the z value:
zVal <- round(qnorm(1 - (1 - prob)/2), 2)
Example, for 90%:
> zVal <- round(qnorm(1 - (1 - 0.90)/2), 2)
> zVal
[1] 1.64 |
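The same two-sided critical value can be computed without R. A Python sketch using only the standard library's statistics.NormalDist (Python 3.8+), mirroring the qnorm() call above:

```python
from statistics import NormalDist

def z_val(prob):
    # Two-sided critical value: the 1 - (1 - prob)/2 quantile of N(0, 1)
    return round(NormalDist().inv_cdf(1 - (1 - prob) / 2), 2)

print(z_val(0.90))  # 1.64
print(z_val(0.95))  # 1.96
print(z_val(0.99))  # 2.58
```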
44,112 | R resources in non-English languages | In German:
A short introduction to R: very short, covers only the basics of R programming
http://de.wikibooks.org/wiki/GNU_R teaches the basics of R programming in detail and also contains some examples of producing graphics and statistics.
cran.r-project.org/doc/contrib/Sawitzki-Einfuehrung.pdf a lengthy introductio... | R resources in non-English languages | In German:
A short introduction to R: very short, covers only the basics of R programming
http://de.wikibooks.org/wiki/GNU_R teaches the basics of R programming in detail and also contains some exam | R resources in non-English languages
In German:
A short introduction to R: very short, covers only the basics of R programming
http://de.wikibooks.org/wiki/GNU_R teaches the basics of R programming in detail and also contains some examples of producing graphics and statistics.
cran.r-project.org/doc/contrib/Sawitzki-... | R resources in non-English languages
In German:
A short introduction to R: very short, covers only the basics of R programming
http://de.wikibooks.org/wiki/GNU_R teaches the basics of R programming in detail and also contains some exam |
44,113 | R resources in non-English languages | There doesn't appear to be much in Russian, but here are a couple of links:
http://herba.msu.ru/shipunov/software/r/r-ru.htm contains pointers to a number of Russian-language R resources;
http://voliadis.ru/taxonomy/term/18 is a blog with some R content. | R resources in non-English languages | There doesn't appear to be much in Russian, but here are a couple of links:
http://herba.msu.ru/shipunov/software/r/r-ru.htm contains pointers to a number of Russian-language R resources;
http://volia | R resources in non-English languages
There doesn't appear to be much in Russian, but here are a couple of links:
http://herba.msu.ru/shipunov/software/r/r-ru.htm contains pointers to a number of Russian-language R resources;
http://voliadis.ru/taxonomy/term/18 is a blog with some R content. | R resources in non-English languages
There doesn't appear to be much in Russian, but here are a couple of links:
http://herba.msu.ru/shipunov/software/r/r-ru.htm contains pointers to a number of Russian-language R resources;
http://volia |
44,114 | R resources in non-English languages | Some German blog entries:
http://www.schockwellenreiter.de/blog/tag/r/
and
http://markheckmann.wordpress.com/category/r-r-code/
edit: and one more:
http://wagezudenken.blogspot.com/ | R resources in non-English languages | Some German blog entries:
http://www.schockwellenreiter.de/blog/tag/r/
and
http://markheckmann.wordpress.com/category/r-r-code/
edit: and one more:
http://wagezudenken.blogspot.com/ | R resources in non-English languages
Some German blog entries:
http://www.schockwellenreiter.de/blog/tag/r/
and
http://markheckmann.wordpress.com/category/r-r-code/
edit: and one more:
http://wagezudenken.blogspot.com/ | R resources in non-English languages
Some German blog entries:
http://www.schockwellenreiter.de/blog/tag/r/
and
http://markheckmann.wordpress.com/category/r-r-code/
edit: and one more:
http://wagezudenken.blogspot.com/ |
44,115 | R resources in non-English languages | All RSS feeds I follow are in English actually, so I'll just point to tutorials available in French, or made by French researchers.
Apart from the Contributed Documentation on CRAN, I often browse the R website hosted at the bioinformatics lab in Lyon (France); it is mostly in French, but it also includes English mater... | R resources in non-English languages | All RSS feeds I follow are in English actually, so I'll just point to tutorials available in French, or made by French researchers.
Apart from the Contributed Documentation on CRAN, I often browse the | R resources in non-English languages
All RSS feeds I follow are in English actually, so I'll just point to tutorials available in French, or made by French researchers.
Apart from the Contributed Documentation on CRAN, I often browse the R website hosted at the bioinformatics lab in Lyon (France); it is mostly in Frenc... | R resources in non-English languages
All RSS feeds I follow are in English actually, so I'll just point to tutorials available in French, or made by French researchers.
Apart from the Contributed Documentation on CRAN, I often browse the |
44,116 | R resources in non-English languages | Here is a German blog with some posts on R:
http://blog.berndweiss.net/tag/r/
Recently started, with no posts on R yet, but focused on open data, is this blog:
http://blog.zeit.de/open-data | R resources in non-English languages | Here is a German blog with some posts on R:
http://blog.berndweiss.net/tag/r/
Recently started, with no posts on R yet, but focused on open data, is this blog:
http://blog.zeit.de/open-data | R resources in non-English languages
Here is a German blog with some posts on R:
http://blog.berndweiss.net/tag/r/
Recently started, with no posts on R yet, but focused on open data, is this blog:
http://blog.zeit.de/open-data | R resources in non-English languages
Here is a German blog with some posts on R:
http://blog.berndweiss.net/tag/r/
Recently started, with no posts on R yet, but focused on open data, is this blog:
http://blog.zeit.de/open-data |
44,117 | R resources in non-English languages | See the bottom two-thirds of http://cran.fhcrc.org/other-docs.html (or any other CRAN mirror). | R resources in non-English languages | See the bottom two-thirds of http://cran.fhcrc.org/other-docs.html (or any other CRAN mirror). | R resources in non-English languages
See the bottom two-thirds of http://cran.fhcrc.org/other-docs.html (or any other CRAN mirror). | R resources in non-English languages
See the bottom two-thirds of http://cran.fhcrc.org/other-docs.html (or any other CRAN mirror). |
44,118 | Why a sample of skewed normal distribution is not normal? | I was under the impression that if I randomly sample from a skewed normal distribution, the distribution of my sample would be normal based on central limit theorem
You are incorrect in your understanding of the central limit theorem (it is a pretty common misconception, as Dave pointed out). The CLT states that under... | Why a sample of skewed normal distribution is not normal? | I was under the impression that if I randomly sample from a skewed normal distribution, the distribution of my sample would be normal based on central limit theorem
You are incorrect in your understa | Why a sample of skewed normal distribution is not normal?
I was under the impression that if I randomly sample from a skewed normal distribution, the distribution of my sample would be normal based on central limit theorem
You are incorrect in your understanding of the central limit theorem (it is a pretty common misc... | Why a sample of skewed normal distribution is not normal?
I was under the impression that if I randomly sample from a skewed normal distribution, the distribution of my sample would be normal based on central limit theorem
You are incorrect in your understa |
44,119 | Why a sample of skewed normal distribution is not normal? | Consider this:
If you take the whole population as your sample (i.e. the very, very large “sample”), would your skewed population then, by some miracle, suddenly become normal? | Why a sample of skewed normal distribution is not normal? | Consider this:
If you take the whole population as your sample (i.e. the very, very large “sample”), would your skewed population then, by some miracle, suddenly become normal? | Why a sample of skewed normal distribution is not normal?
Consider this:
If you take the whole population as your sample (i.e. the very, very large “sample”), would your skewed population then, by some miracle, suddenly become normal? | Why a sample of skewed normal distribution is not normal?
Consider this:
If you take the whole population as your sample (i.e. the very, very large “sample”), would your skewed population then, by some miracle, suddenly become normal? |
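The distinction both answers make, that the raw sample inherits the population's skew while the distribution of sample means approaches normality, shows up in a quick simulation. A Python sketch with an exponential population (chosen arbitrarily; its theoretical skewness is 2):

```python
import numpy as np

def skewness(a):
    # Moment-based sample skewness: E[(a - mean)^3] / sd^3
    d = np.asarray(a, dtype=float) - np.mean(a)
    return (d**3).mean() / (d**2).mean() ** 1.5

rng = np.random.default_rng(42)
raw_sample = rng.exponential(size=100_000)                       # one big sample
sample_means = rng.exponential(size=(10_000, 100)).mean(axis=1)  # 10k means of n=100

print(round(skewness(raw_sample), 1))    # near 2: the sample stays skewed
print(round(skewness(sample_means), 1))  # near 2/sqrt(100) = 0.2: close to normal
```

Growing the single sample only sharpens the picture of the skewed population; it is the averaging over repeated samples that the CLT makes approximately normal.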
44,120 | Deep Learning based time series forecasting | You can't meaningfully talk about DNNs or ARIMA being "better at time series forecasting". It depends enormously on what kind of series you are looking at: short vs. long series, many vs. few or only one related series, causal drivers or not etc.
Anyone who makes sweeping statements here is like a salesman who knows ex... | Deep Learning based time series forecasting | You can't meaningfully talk about DNNs or ARIMA being "better at time series forecasting". It depends enormously on what kind of series you are looking at: short vs. long series, many vs. few or only | Deep Learning based time series forecasting
You can't meaningfully talk about DNNs or ARIMA being "better at time series forecasting". It depends enormously on what kind of series you are looking at: short vs. long series, many vs. few or only one related series, causal drivers or not etc.
Anyone who makes sweeping sta... | Deep Learning based time series forecasting
You can't meaningfully talk about DNNs or ARIMA being "better at time series forecasting". It depends enormously on what kind of series you are looking at: short vs. long series, many vs. few or only |
44,121 | Deep Learning based time series forecasting | You might also consider the drivers behind the 'Deep-learning-beats-all' trend you mention. Much of the hype around these techniques comes from the superiority of these methods in image recognition and natural language problems. These domains are defined by exceptionally large datasets (e.g. ImageNet > 14 million images, i... | Deep Learning based time series forecasting | You might also consider the drivers behind the 'Deep-learning-beats-all' trend you mention. Much of the hype around these techniques comes from the superiority of these methods in image recognition and na | Deep Learning based time series forecasting
You might also consider the drivers behind the 'Deep-learning-beats-all' trend you mention. Much of the hype around these techniques comes from the superiority of these methods in image recognition and natural language problems. These domains are defined by exceptionally large da... | Deep Learning based time series forecasting
You might also consider the drivers behind the 'Deep-learning-beats-all' trend you mention. Much of the hype around these techniques comes from the superiority of these methods in image recognition and na |
44,122 | Deep Learning based time series forecasting | Is the reason behind the result from the fact that DNN algorithms require a large-sized data?
There are parallels to time-series and tabular data. A recent work, Tabular Data: Deep Learning is Not All You Need, shows a similar trend: DNNs do not outperform conventional models on tabular data. However, it is right t... | Deep Learning based time series forecasting | Is the reason behind the result from the fact that DNN algorithms require a large-sized data?
There are parallels to time-series and tabular data. A recent work Tabular Data: Deep Learning is Not All | Deep Learning based time series forecasting
Is the reason behind the result from the fact that DNN algorithms require a large-sized data?
There are parallels to time-series and tabular data. A recent work, Tabular Data: Deep Learning is Not All You Need, shows a similar trend: DNNs do not outperform conventional mo... | Deep Learning based time series forecasting
Is the reason behind the result from the fact that DNN algorithms require a large-sized data?
There are parallels to time-series and tabular data. A recent work Tabular Data: Deep Learning is Not All |
44,123 | Deep Learning based time series forecasting | Statistical tools applied to time series forecasting are well developed and approach-oriented. You will find many techniques: ARIMA, SARIMA, SARIMAX, VAR, VARMAX, VECM, and so on. Each method was developed for a particular situation and type of data and series.
On the other hand, DNNs such as RNNs and LSTMs are challenging mode... | Deep Learning based time series forecasting | Statistical tools applied to time series forecasting are well developed and approach-oriented. You will find many techniques: ARIMA, SARIMA, SARIMAX, VAR, VARMAX, VECM, and so on. Each method was develope | Deep Learning based time series forecasting
Statistical tools applied to time series forecasting are well developed and approach-oriented. You will find many techniques: ARIMA, SARIMA, SARIMAX, VAR, VARMAX, VECM, and so on. Each method was developed for a particular situation and type of data and series.
On the other hand ... | Deep Learning based time series forecasting
Statistical tools applied to time series forecasting are well developed and approach-oriented. You will find many techniques: ARIMA, SARIMA, SARIMAX, VAR, VARMAX, VECM, and so on. Each method was develope |
44,124 | theoretical basis for logistic regression | There is no theoretical basis for logistic regression (in general as a choice vs. another model). Two things are arbitrary:
summing the influences of each variable, each influence being proportional to the variable (linear predictor)
the sigmoid link (logit)
The first assumption is similar to linear regression: a s... | theoretical basis for logistic regression | There is no theoretical basis for logistic regression (in general as a choice vs. another model). Two things are arbitrary:
summing the influences of each variable, each influence being proportiona | theoretical basis for logistic regression
There is no theoretical basis for logistic regression (in general as a choice vs. another model). Two things are arbitrary:
summing the influences of each variable, each influence being proportional to the variable (linear predictor)
the sigmoid link (logit)
The first assum... | theoretical basis for logistic regression
There is no theoretical basis for logistic regression (in general as a choice vs. another model). Two things are arbitrary:
summing the influences of each variable, each influence being proportiona |
44,125 | theoretical basis for logistic regression | No, it isn't necessary. Economists like to get elbow-deep in mathematical hypotheses of how people make decisions, hence their frequent invocation of utility theory. But logistic regression can be justified in statistical terms without reference to utility, using the idea that a unit change in a predictor relates to an... | theoretical basis for logistic regression | No, it isn't necessary. Economists like to get elbow-deep in mathematical hypotheses of how people make decisions, hence their frequent invocation of utility theory. But logistic regression can be jus | theoretical basis for logistic regression
No, it isn't necessary. Economists like to get elbow-deep in mathematical hypotheses of how people make decisions, hence their frequent invocation of utility theory. But logistic regression can be justified in statistical terms without reference to utility, using the idea that ... | theoretical basis for logistic regression
No, it isn't necessary. Economists like to get elbow-deep in mathematical hypotheses of how people make decisions, hence their frequent invocation of utility theory. But logistic regression can be jus |
44,126 | theoretical basis for logistic regression | You get the impression that economists do it because that's what they are forced to write in order to get published in microeconomics. Pure empirical studies are hard to publish.
However, this is changing, and not all economists do this. For instance, take a look at this work: "Analyzing the Risk of Mortgage Default." T... | theoretical basis for logistic regression | You get the impression that economists do it because that's what they are forced to write in order to get published in microeconomics. Pure empirical studies are hard to publish.
However, this is chang | theoretical basis for logistic regression
You get the impression that economists do it because that's what they are forced to write in order to get published in microeconomics. Pure empirical studies are hard to publish.
However, this is changing, and not all economists do this. For instance, take a look at this work: "... | theoretical basis for logistic regression
You get the impression that economists do it because that's what they are forced to write in order to get published in microeconomics. Pure empirical studies are hard to publish.
However, this is chang |
44,127 | theoretical basis for logistic regression | Other people have answered your question; let me explain a bit more of the philosophy behind the different justifications for logit models.
The utility model used in economics is based on the grand idea to link general preference orderings over outcomes with the ordering of real numbers.
Less abstractly, what economists ha... | theoretical basis for logistic regression | Other people have answered your question; let me explain a bit more of the philosophy behind the different justifications for logit models.
The utility model used in economics is based on the grand idea t | theoretical basis for logistic regression
Other people have answered your question; let me explain a bit more of the philosophy behind the different justifications for logit models.
The utility model used in economics is based on the grand idea to link general preference orderings over outcomes with the ordering of real nu... | theoretical basis for logistic regression
Other people have answered your question; let me explain a bit more of the philosophy behind the different justifications for logit models.
The utility model used in economics is based on the grand idea t |
44,128 | theoretical basis for logistic regression | I'm not an economist nor do I know much utility theory, but I actually think there is some theoretical justification for logistic regression - at least at a high level. In real life, aren't decisions closer to existing on a scale like 0-100% rather than 0/1 binary? The ability to get 'probabilities' out of a logistic m... | theoretical basis for logistic regression | I'm not an economist nor do I know much utility theory, but I actually think there is some theoretical justification for logistic regression - at least at a high level. In real life, aren't decisions | theoretical basis for logistic regression
I'm not an economist nor do I know much utility theory, but I actually think there is some theoretical justification for logistic regression - at least at a high level. In real life, aren't decisions closer to existing on a scale like 0-100% rather than 0/1 binary? The ability ... | theoretical basis for logistic regression
I'm not an economist nor do I know much utility theory, but I actually think there is some theoretical justification for logistic regression - at least at a high level. In real life, aren't decisions |
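One concrete way to state the statistical (rather than utility-based) justification discussed above: logistic regression is exactly the model in which the log-odds of the outcome are linear in the predictors. A small Python check with arbitrary, made-up coefficients:

```python
import math

a, b = -1.0, 2.0  # illustrative intercept and slope

def p(x):
    """P(y = 1 | x) under a logistic model with linear predictor a + b*x."""
    return 1.0 / (1.0 + math.exp(-(a + b * x)))

# The log-odds transform recovers the linear predictor exactly
for x in (-1.0, 0.0, 1.5):
    log_odds = math.log(p(x) / (1.0 - p(x)))
    assert abs(log_odds - (a + b * x)) < 1e-9
print("log(p / (1 - p)) is linear in x")
```

So "a unit change in a predictor multiplies the odds by a constant factor" is the model's whole content; no decision-theoretic story is required, though the latent-utility derivation yields the same functional form.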
44,129 | Under what additional conditions does independence follow from zero correlation? | The statement that you are asking about has two parts:
If $X$ and $Y$ are independent, then $X$ and $Y$ are uncorrelated.
If $X$ and $Y$ are uncorrelated, then $X$ and $Y$ are independent.
Statement 1 is always true and imposes no additional constraints on $X$ and $Y$ other than what already has been assumed, viz. th... | Under what additional conditions does independence follow from zero correlation? | The statement that you are asking about has two parts:
If $X$ and $Y$ are independent, then $X$ and $Y$ are uncorrelated.
If $X$ and $Y$ are uncorrelated, then $X$ and $Y$ are independent.
Statement | Under what additional conditions does independence follow from zero correlation?
The statement that you are asking about has two parts:
If $X$ and $Y$ are independent, then $X$ and $Y$ are uncorrelated.
If $X$ and $Y$ are uncorrelated, then $X$ and $Y$ are independent.
Statement 1 is always true and imposes no additi... | Under what additional conditions does independence follow from zero correlation?
The statement that you are asking about has two parts:
If $X$ and $Y$ are independent, then $X$ and $Y$ are uncorrelated.
If $X$ and $Y$ are uncorrelated, then $X$ and $Y$ are independent.
Statement |
44,130 | Under what additional conditions does independence follow from zero correlation? | For a joint distribution function (CDF) constructed as follows
$$H_{X,Y}(x,y)=F_X(x)G_Y(y)\left[1+\alpha\big(1-F_X(x)\big)\big(1-G_Y(y)\big)\right],\;\;\; |\alpha| \le 1$$
where $F_X(x)$ and $G_Y(y)$ are any two marginal CDF's,
uncorrelatedness (zero covariance) is equivalent to independence.
This is the "Farlie-Gumbel-Mor... | Under what additional conditions does independence follow from zero correlation? | For a joint distribution function (CDF) constructed as follows
$$H_{X,Y}(x,y)=F_X(x)G_Y(y)\left[1+\alpha\big(1-F_X(x)\big)\big(1-G_Y(y)\big)\right],\;\;\; |\alpha| \le 1$$
where $F_X(x)$ and $G_Y(y)$ are a | Under what additional conditions does independence follow from zero correlation?
For a joint distribution function (CDF) constructed as follows
$$H_{X,Y}(x,y)=F_X(x)G_Y(y)\left[1+\alpha\big(1-F_X(x)\big)\big(1-G_Y(y)\big)\right],\;\;\; -1 \le \alpha \le 1$$
where $F_X(x)$ and $G_Y(y)$ are any two marginal CDF's,
uncorrelatedness... | Under what additional conditions does independence follow from zero correlation?
For a joint distribution function (CDF) constructed as follows
$$H_{X,Y}(x,y)=F_X(x)G_Y(y)\left[1+\alpha\big(1-F_X(x)\big)\big(1-G_Y(y)\big)\right],\;\;\; -1 \le \alpha \le 1$$
where $F_X(x)$ and $G_Y(y)$ are a |
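For the FGM family above, the claim can be checked numerically. A sketch (grid size and $\alpha$ values are my choices) with Uniform(0, 1) marginals, where the joint density is $1+\alpha(1-2u)(1-2v)$ and $\mathrm{Cov}(U,V)=\alpha/36$, so zero covariance holds exactly when $\alpha=0$, i.e. independence:

```python
import numpy as np

n = 400
u = (np.arange(n) + 0.5) / n     # midpoint grid on (0, 1)
U, V = np.meshgrid(u, u)

def fgm_cov(a):
    # FGM density with Uniform(0,1) marginals, |a| <= 1
    dens = 1 + a * (1 - 2 * U) * (1 - 2 * V)
    return (U * V * dens).mean() - 0.25   # Cov = E[UV] - E[U]E[V]

for a in (-1.0, 0.0, 0.5):
    print(round(fgm_cov(a), 4), round(a / 36, 4))   # agrees with a/36
```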
44,131 | Under what additional conditions does independence follow from zero correlation? | The result is only guaranteed to hold when X and Y form a bivariate normal distribution. You will find this in most multivariate analysis texts as well as on some threads on this site. | Under what additional conditions does independence follow from zero correlation? | The result is only guaranteed to hold when X and Y form a bivariate normal distribution. You will find this in most multivariate analysis texts as well as on some threads on this site. | Under what additional conditions does independence follow from zero correlation?
The result is only guaranteed to hold when X and Y form a bivariate normal distribution. You will find this in most multivariate analysis texts as well as on some threads on this site. | Under what additional conditions does independence follow from zero correlation?
The result is only guaranteed to hold when X and Y form a bivariate normal distribution. You will find this in most multivariate analysis texts as well as on some threads on this site. |
44,132 | Proving that $x^TAx = tr(xx^TA)$? [closed] | A well-known property of traces (see Matrix Cookbook, 1.1 (16)) is that for any $A, B, C$, $\mbox{tr}(ABC) = \mbox{tr}(BCA)$.
Applying this to your case gives $\mbox{tr}(x x^T A) = \mbox{tr}(x^T A x)$. Note that the expression in the trace of the right hand side is a scalar. The trace of a scalar is the scalar itself. | Proving that $x^TAx = tr(xx^TA)$? [closed] | A well-known property of traces (see Matrix Cookbook, 1.1 (16)) is that for any $A, B, C$, $\mbox{tr}(ABC) = \mbox{tr}(BCA)$.
Applying this to your case gives $\mbox{tr}(x x^T A) = \mbox{tr}(x^T A x)$ | Proving that $x^TAx = tr(xx^TA)$? [closed]
A well-known property of traces (see Matrix Cookbook, 1.1 (16)) is that for any $A, B, C$, $\mbox{tr}(ABC) = \mbox{tr}(BCA)$.
Applying this to your case gives $\mbox{tr}(x x^T A) = \mbox{tr}(x^T A x)$. Note that the expression in the trace of the right hand side is a scalar. T... | Proving that $x^TAx = tr(xx^TA)$? [closed]
A well-known property of traces (see Matrix Cookbook, 1.1 (16)) is that for any $A, B, C$, $\mbox{tr}(ABC) = \mbox{tr}(BCA)$.
Applying this to your case gives $\mbox{tr}(x x^T A) = \mbox{tr}(x^T A x)$ |
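A quick numeric sanity check of the identity (random inputs, my own sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

quad = x @ A @ x                    # the scalar x^T A x
tr = np.trace(np.outer(x, x) @ A)   # tr(x x^T A)
print(np.isclose(quad, tr))         # True
```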
44,133 | Proving that $x^TAx = tr(xx^TA)$? [closed] | Some guidance in the form of an outline of the steps
Note that $x^TAx$ is a scalar.
Use what you know about the trace and scalars to convert it to a trace.
Use properties of the trace to convert it to what you need. | Proving that $x^TAx = tr(xx^TA)$? [closed] | Some guidance in the form of an outline of the steps
Note that $x^TAx$ is a scalar.
Use what you know about the trace and scalars to convert it to a trace.
Use properties of the trace to convert it t | Proving that $x^TAx = tr(xx^TA)$? [closed]
Some guidance in the form of an outline of the steps
Note that $x^TAx$ is a scalar.
Use what you know about the trace and scalars to convert it to a trace.
Use properties of the trace to convert it to what you need. | Proving that $x^TAx = tr(xx^TA)$? [closed]
Some guidance in the form of an outline of the steps
Note that $x^TAx$ is a scalar.
Use what you know about the trace and scalars to convert it to a trace.
Use properties of the trace to convert it t |
44,134 | Proving that $x^TAx = tr(xx^TA)$? [closed] | Given $\mathrm a, \mathrm b \in \mathbb R^n$,
$$\mbox{tr} ( \, \mathrm a \mathrm b^\top ) = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \mathrm a^\top \mathrm b$$
Thus,
$$\mbox{tr} (\mathrm x \mathrm x^\top \mathrm A) = \mbox{tr} (\mathrm x (\mathrm A^\top \mathrm x)^\top ) = \mathrm x^\top \mathrm A^\top \mathrm x = \mathr... | Proving that $x^TAx = tr(xx^TA)$? [closed] | Given $\mathrm a, \mathrm b \in \mathbb R^n$,
$$\mbox{tr} ( \, \mathrm a \mathrm b^\top ) = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \mathrm a^\top \mathrm b$$
Thus,
$$\mbox{tr} (\mathrm x \mathrm x^\to | Proving that $x^TAx = tr(xx^TA)$? [closed]
Given $\mathrm a, \mathrm b \in \mathbb R^n$,
$$\mbox{tr} ( \, \mathrm a \mathrm b^\top ) = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \mathrm a^\top \mathrm b$$
Thus,
$$\mbox{tr} (\mathrm x \mathrm x^\top \mathrm A) = \mbox{tr} (\mathrm x (\mathrm A^\top \mathrm x)^\top ) = \math... | Proving that $x^TAx = tr(xx^TA)$? [closed]
Given $\mathrm a, \mathrm b \in \mathbb R^n$,
$$\mbox{tr} ( \, \mathrm a \mathrm b^\top ) = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \mathrm a^\top \mathrm b$$
Thus,
$$\mbox{tr} (\mathrm x \mathrm x^\to |
44,135 | Linear regression - is a model "useless" if $R^2$ is very small? | Although $R^{2} < 0.01$ is not usually very helpful, the value of a model has to also be judged by (1) the difficulty of the task and (2) whether one hopes to learn tendencies vs. predict responses for individual subjects. Some tasks, such as predicting how many days a patient will live, are very difficult and low $R^{... | Linear regression - is a model "useless" if $R^2$ is very small? | Although $R^{2} < 0.01$ is not usually very helpful, the value of a model has to also be judged by (1) the difficulty of the task and (2) whether one hopes to learn tendencies vs. predict responses fo | Linear regression - is a model "useless" if $R^2$ is very small?
Although $R^{2} < 0.01$ is not usually very helpful, the value of a model has to also be judged by (1) the difficulty of the task and (2) whether one hopes to learn tendencies vs. predict responses for individual subjects. Some tasks, such as predicting ... | Linear regression - is a model "useless" if $R^2$ is very small?
Although $R^{2} < 0.01$ is not usually very helpful, the value of a model has to also be judged by (1) the difficulty of the task and (2) whether one hopes to learn tendencies vs. predict responses fo |
44,136 | Linear regression - is a model "useless" if $R^2$ is very small? | Despite the traditional negative attitude toward statistical models with low $R^2$, I would like to make two points: 1) "low" is a relative term - one model with a lower $R^2$ could be better (have better explanatory power or parsimony) and more useful (better reflect reality) than others with higher $R^2$ values. Having... | Linear regression - is a model "useless" if $R^2$ is very small? | Despite the traditional negative attitude toward statistical models with low $R^2$, I would like to make two points: 1) "low" is a relative term - one model with a lower $R^2$ could be better (have better | Linear regression - is a model "useless" if $R^2$ is very small?
Despite traditional negative attitude toward statistical models with low $R^2$, I would like to make two points: 1) "low" is a relative term - one model with a lower $R^2$ could be better (have better explanatory power or parsimony) and more useful (bette... | Linear regression - is a model "useless" if $R^2$ is very small?
Despite traditional negative attitude toward statistical models with low $R^2$, I would like to make two points: 1) "low" is a relative term - one model with a lower $R^2$ could be better (have better |
44,137 | Linear regression - is a model "useless" if $R^2$ is very small? | If your model is correctly specified and the appropriate conditions for your inference method are satisfied (e.g. i.i.d. Gaussian errors if you want to use a t-test), then you should be able to achieve your nominal type I error rate, regardless of n and regardless of $R^2$. (Though as a separate issue, a large sample s... | Linear regression - is a model "useless" if $R^2$ is very small? | If your model is correctly specified and the appropriate conditions for your inference method are satisfied (e.g. i.i.d. Gaussian errors if you want to use a t-test), then you should be able to achiev | Linear regression - is a model "useless" if $R^2$ is very small?
If your model is correctly specified and the appropriate conditions for your inference method are satisfied (e.g. i.i.d. Gaussian errors if you want to use a t-test), then you should be able to achieve your nominal type I error rate, regardless of n and r... | Linear regression - is a model "useless" if $R^2$ is very small?
If your model is correctly specified and the appropriate conditions for your inference method are satisfied (e.g. i.i.d. Gaussian errors if you want to use a t-test), then you should be able to achiev |
44,138 | Linear regression - is a model "useless" if $R^2$ is very small? | A model is useful if it allows you to better understand what is happening with your data/theory and if it is correctly computed. In some cases, when the criterion variable is determined by a huge number of causes, getting high $R^2$ is very difficult. | Linear regression - is a model "useless" if $R^2$ is very small? | A model is useful if it allows you to better understand what is happening with your data/theory and if it is correctly computed. In some cases, when the criterion variable is determined by a huge numb | Linear regression - is a model "useless" if $R^2$ is very small?
A model is useful if it allows you to better understand what is happening with your data/theory and if it is correctly computed. In some cases, when the criterion variable is determined by a huge number of causes, getting high $R^2$ is very difficult. | Linear regression - is a model "useless" if $R^2$ is very small?
A model is useful if it allows you to better understand what is happening with your data/theory and if it is correctly computed. In some cases, when the criterion variable is determined by a huge numb |
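The "low $R^2$ can still be useful" point is easy to simulate. A sketch (effect size and seed are my choices): a tiny true slope buried in noise yields $R^2 \approx 0.01$, yet with a large sample the slope is estimated precisely.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.standard_normal(n)
y = 0.1 * x + rng.standard_normal(n)   # signal variance 0.01 vs noise variance 1

slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
r2 = 1 - resid.var() / y.var()
print(round(r2, 3), round(slope, 3))   # tiny R^2, slope close to the true 0.1
```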
44,139 | Naive Bayes: Imbalanced Dataset in Real-time Scenario | To create a good model, the model has to be built on training data which is of the same "structure" as the data the model will be applied to later on. This is the one boring assumption which underlies all classification models.
So by using a balanced data set while the real world is not balanced, you have already intro... | Naive Bayes: Imbalanced Dataset in Real-time Scenario | To create a good model, the model has to be built on training data which is of the same "structure" as the data the model will be applied to later on. This is the one boring assumption which underlies all c | Naive Bayes: Imbalanced Dataset in Real-time Scenario
To create a good model, the model has to be built on training data which is of the same "structure" as the data the model will applied later on. This is the one boring assumption which underlies all classification models.
So by using an balanced data set meanwhile ... | Naive Bayes: Imbalanced Dataset in Real-time Scenario
To create a good model, the model has to be built on training data which is of the same "structure" as the data the model will applied later on. This is the one boring assumption which underlies all c |
44,140 | Naive Bayes: Imbalanced Dataset in Real-time Scenario | In the paper "Tackling the Poor Assumptions of Naive Bayes Text Classifiers" the authors deal with this problem, among others that stem from the character of the naive Bayes algorithm. Having highly skewed data leads to a bias in your weights, which causes the bad precision.
Concretely for the problem of skew data, w... | Naive Bayes: Imbalanced Dataset in Real-time Scenario | In the paper "Tackling the Poor Assumptions of Naive Bayes Text Classifiers" the authors deal with this problem, among others, which stem from the character of the naive bayes algorithm. Having highly | Naive Bayes: Imbalanced Dataset in Real-time Scenario
In the paper "Tackling the Poor Assumptions of Naive Bayes Text Classifiers" the authors deal with this problem, among others, which stem from the character of the naive bayes algorithm. Having highly skewed data leads to a bias in your weights, which causes the bad... | Naive Bayes: Imbalanced Dataset in Real-time Scenario
In the paper "Tackling the Poor Assumptions of Naive Bayes Text Classifiers" the authors deal with this problem, among others, which stem from the character of the naive bayes algorithm. Having highly |
44,141 | Naive Bayes: Imbalanced Dataset in Real-time Scenario | Any Bayesian classifier can be easily tweaked to incorporate knowledge about how often a particular class is expected. When you train a Bayesian classifier, two sets of parameters are learned:
P(C=c), the probability that an observation belongs to class C (the class prior probabilities)
P(F=f | C=c), the probability t... | Naive Bayes: Imbalanced Dataset in Real-time Scenario | Any Bayesian classifier can be easily tweaked to incorporate knowledge about how often a particular class is expected. When you train a Bayesian classifier, two sets of parameters are learned:
P(C=c) | Naive Bayes: Imbalanced Dataset in Real-time Scenario
Any Bayesian classifier can be easily tweaked to incorporate knowledge about how often a particular class is expected. When you train a Bayesian classifier, two sets of parameters are learned:
P(C=c), the probability that an observation belongs to class C (the clas... | Naive Bayes: Imbalanced Dataset in Real-time Scenario
Any Bayesian classifier can be easily tweaked to incorporate knowledge about how often a particular class is expected. When you train a Bayesian classifier, two sets of parameters are learned:
P(C=c) |
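The prior swap described above can be sketched in a few lines (the likelihood and prior numbers here are invented for illustration): the per-class feature model P(F|C) stays fixed, and only P(C) is replaced with the deployment-time class frequencies.

```python
# Invented likelihoods: P(feature present | class) for two classes.
p_f_given_c = {"spam": 0.8, "ham": 0.3}

def posterior_spam(prior_spam):
    # Bayes rule with a swappable class prior; the feature model is unchanged.
    num = p_f_given_c["spam"] * prior_spam
    den = num + p_f_given_c["ham"] * (1 - prior_spam)
    return num / den

print(round(posterior_spam(0.5), 3))   # balanced training prior: spam wins
print(round(posterior_spam(0.05), 3))  # realistic deployment prior: much lower posterior
```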
44,142 | interaction of categorical and continuous variables | In the scenario you describe, least squares regression will allow you to tell a very straightforward story:
First of all, imagine that you have no dichotomous independent variable. So:
(1) $y_{i} = \beta_{0} + \beta_{1}x_{1i} + \varepsilon_{i}$
Your regression describes the relationship between your dependent variable $... | interaction of categorical and continuous variables | In the scenario you describe least squares regression will allow you to tell a very straightforward story:
First of all, imagine that you have no dichotomous independent variable. So:
(1) $y_{i} = \be | interaction of categorical and continuous variables
In the scenario you describe least squares regression will allow you to tell a very straightforward story:
First of all, imagine that you have no dichotomous independent variable. So:
(1) $y_{i} = \beta_{0} + \beta_{1}x_{1i} + \varepsilon_{i}$
Your regression describe... | interaction of categorical and continuous variables
In the scenario you describe least squares regression will allow you to tell a very straightforward story:
First of all, imagine that you have no dichotomous independent variable. So:
(1) $y_{i} = \be |
44,143 | interaction of categorical and continuous variables | @Alexis seems to cover the equations pretty well. Here's some example code in r:
set.seed(8);d8a=data.frame(x=rnorm(99),z=rbinom(99,1,.5)) #Data sim'd to fit the scenario
d8a$y=(d8a$x+rnorm(99,0,3))*(2*d8a$z-1) #Guarantees an interaction
summary(lm(y~scale(x)*factor(z),d8a)) #Fits a GLM wi... | interaction of categorical and continuous variables | @Alexis seems to cover the equations pretty well. Here's some example code in r:
set.seed(8);d8a=data.frame(x=rnorm(99),z=rbinom(99,1,.5)) #Data sim'd to fit the scenario
d8a$y=(d8a$x+rnorm(99,0,3) | interaction of categorical and continuous variables
@Alexis seems to cover the equations pretty well. Here's some example code in r:
set.seed(8);d8a=data.frame(x=rnorm(99),z=rbinom(99,1,.5)) #Data sim'd to fit the scenario
d8a$y=(d8a$x+rnorm(99,0,3))*(2*d8a$z-1) #Guarantees an interaction
summar... | interaction of categorical and continuous variables
@Alexis seems to cover the equations pretty well. Here's some example code in r:
set.seed(8);d8a=data.frame(x=rnorm(99),z=rbinom(99,1,.5)) #Data sim'd to fit the scenario
d8a$y=(d8a$x+rnorm(99,0,3) |
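For readers without R, the same idea in Python (my sketch, with different simulated data): fit $y = b_0 + b_1 x + b_2 z + b_3 xz$ and read the per-group slopes off the coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = rng.standard_normal(n)
z = rng.integers(0, 2, n)                                  # dichotomous variable
y = 1.0 + 0.5 * x + 0.8 * z + 1.5 * x * z + rng.standard_normal(n)

# Design matrix: intercept, x, z, and the x*z interaction.
X = np.column_stack([np.ones(n), x, z, x * z])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(b1, 2))        # slope for the z = 0 group, near 0.5
print(round(b1 + b3, 2))   # slope for the z = 1 group, near 0.5 + 1.5 = 2.0
```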
44,144 | What does "a distribution over distributions" mean? | Suppose there are boxes with chocolates, with some portion of dark and sweet chocolates. And you are interested in eating them (chocolates, not boxes).
You pick at random one of the boxes. (Some kinds of boxes can be more common than others.) Then, you can pick at random one of the chocolates.
So you have a distribut... | What does "a distribution over distributions" mean? | Suppose there are boxes with chocolates, with some portion of dark and sweet chocolates. And you are interested in eating them (chocolates, not - boxes).
You pick at random one of the boxes. (Some kin | What does "a distribution over distributions" mean?
Suppose there are boxes with chocolates, with some portion of dark and sweet chocolates. And you are interested in eating them (chocolates, not - boxes).
You pick at random one of the boxes. (Some kinds of boxes can be more common than others.) Then, you can pick at r... | What does "a distribution over distributions" mean?
Suppose there are boxes with chocolates, with some portion of dark and sweet chocolates. And you are interested in eating them (chocolates, not - boxes).
You pick at random one of the boxes. (Some kin |
44,145 | What does "a distribution over distributions" mean? | Suppose we are going to play a game in which I will flip a coin. If the coin is heads (H) then you win, if the coin is tails (T) then I win. To figure out whether to play the game, you would like to know the probability of H, P(H), and the probability of tails, P(T).
We could write down these two probabilities in a lis... | What does "a distribution over distributions" mean? | Suppose we are going to play a game in which I will flip a coin. If the coin is heads (H) then you win, if the coin is tails (T) then I win. To figure out whether to play the game, you would like to k | What does "a distribution over distributions" mean?
Suppose we are going to play a game in which I will flip a coin. If the coin is heads (H) then you win, if the coin is tails (T) then I win. To figure out whether to play the game, you would like to know the probability of H, P(H), and the probability of tails, P(T).
... | What does "a distribution over distributions" mean?
Suppose we are going to play a game in which I will flip a coin. If the coin is heads (H) then you win, if the coin is tails (T) then I win. To figure out whether to play the game, you would like to k |
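The two-stage picture in both answers can be sketched numerically (the Beta(2, 5) "box" distribution is my choice): first draw a success probability $p$ from a distribution over distributions, then draw an outcome given that $p$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
p = rng.beta(2, 5, n)        # stage 1: draw a Bernoulli parameter ("pick a box")
x = rng.random(n) < p        # stage 2: Bernoulli(p) given the draw ("pick a chocolate")

print(round(x.mean(), 3))    # near E[p] = 2 / (2 + 5) ~ 0.286
```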
44,146 | Intuitive understanding of regularization | Overfitting is always bad, as it means you have done something to your model such that its generalisation performance has become worse. This is less likely to happen when you have lots of data, and in such circumstances regularisation tends to be less helpful, but over-fitting is still something you don't want.
Th... | Intuitive understanding of regularization | overfitting is always bad as it means you have done something to your model that means that it generalisation performance has become worse. This is less likely to happen when you have lots of data, a | Intuitive understanding of regularization
overfitting is always bad as it means you have done something to your model that means that it generalisation performance has become worse. This is less likely to happen when you have lots of data, and in such circumstances regularisation tends to be less helpful, but over-fit... | Intuitive understanding of regularization
overfitting is always bad as it means you have done something to your model that means that it generalisation performance has become worse. This is less likely to happen when you have lots of data, a |
44,147 | Intuitive understanding of regularization | It depends on your model & the specificity of your data. For instance, fitting an unpruned decision tree will always lead to overfitting with even just a few variables. The same goes with parametric models, where a large number of parameters can lead to overfitting, even if there is a lot of data.
Either way, you should ... | Intuitive understanding of regularization | It depends on your model & the specificity of your data. For instance, fitting an unpruned decision tree will always lead to overfitting with even just a few variables. The same goes with parametric m | Intuitive understanding of regularization
It depends on your model & the specificity of your data. For instance, fitting an unpruned decision tree will always lead to overfitting with even just a few variables. The same goes with parametric models, where a large number of parameters can lead to overfitting, even if the... | Intuitive understanding of regularization
It depends on your model & the specificity of your data. For instance, fitting an unpruned decision tree will always lead to overfitting with even just a few variables. The same goes with parametric m |
44,148 | Intuitive understanding of regularization | Is overfitting bad when we have really a lot of data?
Overfitting with a lot of data is still overfitting, and overfitting is bad.
I don't understand why "very large weights fit the training data very
well"?
I found an example in Deep Learning by Goodfellow (page 293):
Suppose we apply logistic regression to a proble... | Intuitive understanding of regularization | Is overfitting bad when we have really a lot of data?
Overfitting with lot of data is still overfitting and overfitting is bad.
I don't understand why "very large weights fit the training data very | Intuitive understanding of regularization
Is overfitting bad when we have really a lot of data?
Overfitting with lot of data is still overfitting and overfitting is bad.
I don't understand why "very large weights fit the training data very
well"?
I found an example in Deep Learning by Goodfellow(page 293):
Suppos... | Intuitive understanding of regularization
Is overfitting bad when we have really a lot of data?
Overfitting with lot of data is still overfitting and overfitting is bad.
I don't understand why "very large weights fit the training data very |
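The weight-shrinking effect of an $L_2$ penalty can be seen directly from the closed-form ridge solution $w = (X^T X + \lambda I)^{-1} X^T y$. A sketch with random data (the penalty values are my choices):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((50, 10))
y = rng.standard_normal(50)

def ridge(lam):
    # Closed-form ridge solution: (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

norms = [np.linalg.norm(ridge(lam)) for lam in (0.0, 1.0, 100.0)]
print([round(v, 3) for v in norms])   # decreasing: bigger penalty, smaller weights
```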
44,149 | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty? | Model accuracy can be defined as the difference between the model prediction and truth, expressed in terms of squared error. So model accuracy is $E([T_{model}-T]^2)$. However, you don't know the true $T$. But you say that you have $T_{obs}$ and you know its error distribution. So based on your assumption $E([T_{obs}-T]^2... | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty? | Model accuracy can be defined as the difference between the model prediction and truth, expressed in terms of squared error. So model accuracy is $E([T_{model}-T]^2)$. However, you don't know the true $ | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
Model accuracy can be defined as the difference between the model prediction and truth expressed in terms of squared error. So model accuracy is $E([T_{model}-T]^2)$ However you don't know the true $T$. But you say the ... | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
Model accuracy can be defined as the difference between the model prediction and truth expressed in terms of squared error. So model accuracy is $E([T_{model}-T]^2)$ However you don't know the true $ |
44,150 | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty? | Given that there is $N(0, \sigma)$ error in your observation, the likelihood of the observation $T_{obs}$ given measurements $x$ is $L(T_{obs}|x) = N(T_{obs}; g(x), \sigma)$. One would need multiple measurements and temperature observations to have an estimate of $\sigma$, e.g. via maximum likelihood. This is the gist... | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty? | Given that there is $N(0, \sigma)$ error in your observation, the likelihood of the observation $T_{obs}$ given measurements $x$ is $L(T_{obs}|x) = N(T_{obs}; g(x), \sigma)$. One would need mult | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
Given that there is $N(0, \sigma)$ error in your observation, then the likelihood of the observation $T_{obs}$ given measurements $x$ is $L(T_{obs}|x) = N(T_{obs}; g(x), \sigma)$. One would need multiple measurements an... | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
Given that there is $N(0, \sigma)$ error in your observation, then the likelihood of the observation $T_{obs}$ given measurements $x$ is $L(T_{obs}|x) = N(T_{obs}; g(x), \sigma)$. One would need mult |
44,151 | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty? | The state of the art in Meteorological forecasting is Ensemble Forecasting. This has only become possible in the last few years because of advances in computing power and the corresponding reduction of the cost of computing.
Ensemble forecasting tries to address the problem of how to get realistic probabilities from de... | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty? | The state of the art in Meteorological forecasting is Ensemble Forecasting. This has only become possible in the last few years because of advances in computing power and the corresponding reduction o | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
The state of the art in Meteorological forecasting is Ensemble Forecasting. This has only become possible in the last few years because of advances in computing power and the corresponding reduction of the cost of comput... | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
The state of the art in Meteorological forecasting is Ensemble Forecasting. This has only become possible in the last few years because of advances in computing power and the corresponding reduction o |
44,152 | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty? | Typically, statistical models (i.e., models of data) have a random component (also sometimes called a 'stochastic component'). For example, a model might be:
$$
Y=X\beta+\epsilon \\
\text{where }\epsilon\sim\mathcal{N}(0,\sigma^2)
$$
This example is a basic regression model. The $X\beta$ is called the structural com... | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty? | Typically, statistical models (i.e., models of data) have a random component (also sometimes called a 'stochastic component'). For example, a model might be:
$$
Y=X\beta+\epsilon \\
\text{where }\ep | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
Typically, statistical models (i.e., models of data) have a random component (also sometimes called a 'stochastic component'). For example, a model might be:
$$
Y=X\beta+\epsilon \\
\text{where }\epsilon\sim\mathcal{N}... | Can the performance of a deterministic model be evaluated without an estimate of model uncertainty?
Typically, statistical models (i.e., models of data) have a random component (also sometimes called a 'stochastic component'). For example, a model might be:
$$
Y=X\beta+\epsilon \\
\text{where }\ep |
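The role of the stochastic component in evaluation can be sketched numerically: when observation noise is independent of the model error, the measurable error against $T_{obs}$ exceeds the error against truth by exactly $\sigma^2$ on average (the noise levels below are my choices).

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500_000
sigma = 0.5
t_true = rng.standard_normal(n)
t_model = t_true + 0.3 * rng.standard_normal(n)    # model error, sd 0.3
t_obs = t_true + sigma * rng.standard_normal(n)    # observation error, sd sigma

mse_obs = np.mean((t_model - t_obs) ** 2)          # what you can measure
mse_true = np.mean((t_model - t_true) ** 2)        # what you actually want
print(round(mse_obs, 3), round(mse_true + sigma ** 2, 3))   # nearly equal
```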
44,153 | Interpreting main effect and interaction | In general, you should not base your model selection solely on statistical significance. Substantive meaning is more important.
In this particular case, you can graph the predicted values for males and females, with the x-axis being income and the y-axis the number of items bought, and a line for each gender.
@gung mak... | Interpreting main effect and interaction | In general, you should not base your model selection solely on statistical significance. Substantive meaning is more important.
In this particular case, you can graph the predicted values for males an | Interpreting main effect and interaction
In general, you should not base your model selection solely on statistical significance. Substantive meaning is more important.
In this particular case, you can graph the predicted values for males and females, with the x-axis being income and the y-axis the number of items boug... | Interpreting main effect and interaction
In general, you should not base your model selection solely on statistical significance. Substantive meaning is more important.
In this particular case, you can graph the predicted values for males an |
44,154 | Interpreting main effect and interaction | Your results suggest that there is no interaction--you simply have a main effect of X1. You could say something like, "The number of tubs of ice-cream people buy is related to their income. For instance, if person A's income is one unit higher than person B's income, person A typically buys $\beta_1$ more tubs of ice... | Interpreting main effect and interaction | Your results suggest that there is no interaction--you simply have a main effect of X1. You could say something like, "The number of tubs of ice-cream people buy is related to their income. For inst | Interpreting main effect and interaction
Your results suggest that there is no interaction--you simply have a main effect of X1. You could say something like, "The number of tubs of ice-cream people buy is related to their income. For instance, if person A's income is one unit higher than person B's income, person A ... | Interpreting main effect and interaction
Your results suggest that there is no interaction--you simply have a main effect of X1. You could say something like, "The number of tubs of ice-cream people buy is related to their income. For inst |
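The suggested predicted-values plot reduces to computing two lines from the fitted coefficients. A sketch with invented coefficients for $y = b_0 + b_1\,\text{income} + b_2\,\text{male} + b_3\,\text{income}\times\text{male}$:

```python
# Coefficients invented for illustration only.
b0, b1, b2, b3 = 2.0, 0.30, 0.50, 0.02

def predicted(income, male):
    return b0 + b1 * income + b2 * male + b3 * income * male

incomes = [0, 10, 20, 30]
female_line = [predicted(i, 0) for i in incomes]   # slope b1
male_line = [predicted(i, 1) for i in incomes]     # slope b1 + b3
print(female_line)
print(male_line)
```

With a tiny (or non-significant) b3, the two lines are nearly parallel, which is the graphical signature of "no interaction".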
44,155 | What does "20/ln(2)" mean in logistic regression? | This is a common scaling factor used for credit scoring models built with logistic regression.
The model output in logistic regression is interpreted as log odds, but in credit scoring we like to deal in points, so a scaling factor is applied to the log odds to convert to the point system.
A widely u... | What does "20/ln(2)" mean in logistic regression? | This is a common scaling factor used for credit scoring models built with logistic regression.
The interpretation of the dependent variable in logistic regression is as log odds, but in credit scoring | What does "20/ln(2)" mean in logistic regression?
This is a common scaling factor used for credit scoring models built with logistic regression.
The interpretation of the dependent variable in logistic regression is as log odds, but in credit scoring, we like to deal in points, thus a scaling factor is applied to the l... | What does "20/ln(2)" mean in logistic regression?
This is a common scaling factor used for credit scoring models built with logistic regression.
The interpretation of the dependent variable in logistic regression is as log odds, but in credit scoring |
44,156 | What does "20/ln(2)" mean in logistic regression? | Typically in credit scoring one would choose a baseline score, e.g. 600. We assign a certain meaning to 600: for example, 600 means the good:bad odds are 30:1 (where bad typically means a default; the default definition is typically 90 days past payment due on the loan, however the bad definition can vary). Typically they ... | What does "20/ln(2)" mean in logistic regression? | Typically in credit scoring one would choose a baseline score, e.g. 600. We assign a certain meaning to 600: for example, 600 means the good:bad odds are 30:1 (where bad typically means a default; the def | What does "20/ln(2)" mean in logistic regression?
44,157 | What does "20/ln(2)" mean in logistic regression? | Since all this theory wasn't that obvious to me, I provide code with formulas to explain how all the "definitions" are translated into the resulting score.
import pandas as pd
import numpy as np
df=pd.DataFrame()
df['fc']=[206, 205, 200, 220, 230, 235, 236, 240,250]
df['cat']=[0, 1, 0, 0, 0, 1, 1, 1,0]
df['good']=[0, 1,...
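To make the points scaling concrete, here is a minimal Python sketch of the idea described in the answers above. The 30:1 odds anchored at 600 points and the 20 "points to double the odds" (PDO) are illustrative values, not a fixed standard:

```python
import numpy as np

# PDO = 20 points per doubling of the odds gives factor = 20 / ln(2);
# the offset calibrates the anchor point (600 points at odds of 30:1).
pdo = 20.0
factor = pdo / np.log(2)            # ~28.85 points per unit of log-odds
offset = 600 - factor * np.log(30)  # so that odds of 30:1 map to 600

def score(odds):
    """Convert good:bad odds (e.g. from a logistic model) to points."""
    return offset + factor * np.log(odds)

print(round(score(30)))   # 600, by construction
print(round(score(60)))   # 620: doubling the odds adds exactly PDO points
```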
44,158 | Small sample linear regression: Where to start | I'd probably take a look at a ridge regression or, better, the lasso. These techniques are often used when there is multicollinearity. There are several options for doing this in R: See the Regularized and Shrinkage Methods section of the Machine Learning & Statistical Learning Task View on CRAN.
You don't have enough ...
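For readers working in Python rather than R, a hedged sketch of the same idea with scikit-learn (the data here are synthetic and purely illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 15, 10                                    # few cases, many predictors
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)    # near-collinear pair
y = 2.0 * X[:, 0] + rng.normal(size=n)

ridge = Ridge(alpha=1.0).fit(X, y)   # shrinks all coefficients towards 0
lasso = Lasso(alpha=0.5).fit(X, y)   # can set coefficients exactly to 0

print((lasso.coef_ == 0).sum())      # the lasso drops some predictors
```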
44,159 | Small sample linear regression: Where to start | It seems to me that the only thing worth doing here is testing a very focussed hypothesis, if you have one. But it seems like you don't.
With so few cases and so many variables, anything else would (in my opinion) be a fishing expedition. That could be a bit useful, perhaps, to generate a hypothesis to test with new ...
44,160 | Small sample linear regression: Where to start | I find @ucfagls's idea most appropriate here, since you have very few observations and a lot of variables. Ridge regression should do its job for prediction purposes.
Another way to analyse the data would be to rely on PLS regression (in this case, PLS1), which shares some ideas with regression on PCA scores but seems mor...
44,161 | Small sample linear regression: Where to start | If you're frustrated with too many correlations, and since you already have your covariance matrix (well, almost), you could do a principal components analysis. You'll end up with fewer dimensions, which is probably fine considering your data set size, and what you end up with won't be intercorrelated anymore.
44,162 | Does higher variance usually mean lower probability density? | Up to a point. Because the density integrates to 1, the typical value of the density will be higher if the distribution has a lower variance and lower if it has a higher variance. For example, the maximum density of a Normal distribution with variance $\sigma^2$ is $1/(\sigma\sqrt{2\pi})$, which gets lower as $\sigma$ ...
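A quick numeric check of the peak-height formula (using scipy's `norm.pdf`):

```python
import math
from scipy.stats import norm

# The normal density peaks at the mean with height 1/(sigma*sqrt(2*pi)),
# so doubling sigma halves the peak.
for sigma in (1.0, 2.0, 4.0):
    peak = norm.pdf(0.0, loc=0.0, scale=sigma)
    print(sigma, peak, 1.0 / (sigma * math.sqrt(2.0 * math.pi)))
```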
44,163 | Does higher variance usually mean lower probability density? | Standard deviation and probability density are exactly inversely correlated in one common and important case: scaled distributions, i.e. the distributions of $a\cdot X$ for different $a$ and same random variable $X$. Thomas' answer has a great example of this.
In the more general case, there does not have to be any rel...
44,164 | Does higher variance usually mean lower probability density? | Response to updated question: If I sample 100 data points from two distributions of the same type, but one with a lower variance and one with a higher variance, would the former one have higher likelihood?
The likelihood $L(\theta|X)$ depends on both the parameter $\theta$ and the random sample $X$, so it's hard to kno...
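A small illustration of that dependence on the sample: for data actually drawn with the smaller variance the narrow model usually wins, but a more dispersed sample favours the wider one (synthetic data, illustrative only):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, size=100)           # drawn with sigma = 1

ll_sigma1 = norm.logpdf(x, scale=1.0).sum()  # log-likelihood under sigma = 1
ll_sigma2 = norm.logpdf(x, scale=2.0).sum()  # log-likelihood under sigma = 2
print(ll_sigma1 > ll_sigma2)                 # the matching sigma fits better

# A sample with large outliers can prefer the higher-variance model:
x_wild = np.array([0.0, 5.0, -5.0])
print(norm.logpdf(x_wild, scale=1.0).sum()
      > norm.logpdf(x_wild, scale=2.0).sum())   # False
```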
44,165 | Does higher variance usually mean lower probability density? | "Probability density" is not a single number that can be lower or higher. It's a function $p(x)$ of $x$. Usually it also depends on some additional parameters $q_i$. For example, in the normal distribution, we have $q_1 = \mu$ and $q_2 = \sigma$.
To change variance, you would change these parameters. You could also cha...
44,166 | Is it worthwhile to use LASSO for variable selection even if it is cumbersome? | Variable selection is not compulsory. The idea that you have to throw away variables is wrong. Actually, unless there are strong reasons to throw away variables, don't do it! Use the full model and use the p-values to guide interpretation rather than throwing away information. An insignificant p-value doesn't mean that...
44,167 | Is it worthwhile to use LASSO for variable selection even if it is cumbersome? | My goal is to understand which variables out of 10 actually explain my dependent variable for a voter/election research.
This is not variable selection!
With variable selection the goal is to reduce the size of a model
Such that it is easier to work with, e.g. less computationally intensive or requiring less future effort...
44,168 | Is it worthwhile to use LASSO for variable selection even if it is cumbersome? | If it's only the feature-importance aspect you are looking for, you can use the RFE method (Recursive Feature Elimination) as well.
But lasso regression takes care of the model not reaching overfit or underfit situations as well as feature importance, by reducing all the useless feature coefficients to zero.
You can use Lasso Regression u...
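A hedged scikit-learn sketch contrasting the two approaches (synthetic data; the parameter values are illustrative):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import Lasso, LinearRegression

X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                       noise=1.0, random_state=0)

# RFE: repeatedly refit and drop the weakest feature until 3 remain.
rfe = RFE(LinearRegression(), n_features_to_select=3).fit(X, y)
print(rfe.support_)                   # boolean mask of kept features

# Lasso: the L1 penalty shrinks useless coefficients exactly to zero.
lasso = Lasso(alpha=1.0).fit(X, y)
print(np.flatnonzero(lasso.coef_))    # indices of surviving features
```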
44,169 | Is it worthwhile to use LASSO for variable selection even if it is cumbersome? | For any others that might be interested in the answer:
I managed to do the LASSO with an Excel plugin in Excel 365.
It did show significantly different results in comparison to stepwise.
Also, I found out that I actually want to build a causal model, not do variable selection (different terminology).
So yes LASSO seem...
44,170 | Can we just "pre-test" the backdoor criterion? | Indeed, given the DAG, you should only see a correlation between X and Z if there is a direct link between the two, and thus you could test for a correlation directly. These and similar tests are done by all causal discovery algorithms that automatically create the DAG from data, such as the PC algorithm.
However, from...
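As a minimal sketch of such a pre-test (a plain marginal-correlation check on simulated data; in practice one would also test the conditional independencies implied by the DAG):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 500
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)   # a Z -> X edge exists
w = rng.normal(size=n)             # W has no edge to X

r_xz, p_xz = pearsonr(x, z)
r_xw, p_xw = pearsonr(x, w)
print(p_xz < 0.05)      # the X-Z association is detected
print(round(r_xw, 2))   # near zero: no evidence of an X-W edge
```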
44,171 | Can we just "pre-test" the backdoor criterion? | Yes, in order to confirm a confounding relationship you may perform a regression (or Chi-squared test or other suitable model) of $Z$ on $X$ and $Z$ on $Y$. This is exactly what I'm doing right now for a difference-in-differences healthcare analysis.
There may be confounders or colliders, measured or unmeasured, associ...
44,172 | Alternative formula for the Bernoulli pmf? | Your alternative form is often written in braced form as
$$
f(x)=\begin{cases} p & \text{if $x=1$} \\
1-p & \text{if $x=0$}
\end{cases} $$
and there is nothing wrong with that. It might be useful, for instance, for programming and for elementary exposition.
But if you want to do any form of algebr...
44,173 | Alternative formula for the Bernoulli pmf? | There's nothing wrong with it, as it evaluates to the values it should. The usual formulation, however, uses powers, so it becomes a case of the binomial distribution with sample size $n=1$. Recall that the probability mass function of the binomial distribution is
$$
{n \choose x} \,p^x (1-p)^{n-x}
$$
Where we have $n$ inde...
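The two formulations can be checked against each other directly; a small sketch:

```python
def pmf_braced(x, p):
    # branch form: f(1) = p, f(0) = 1 - p
    if x == 1:
        return p
    if x == 0:
        return 1 - p
    raise ValueError("support is {0, 1}")

def pmf_power(x, p):
    # binomial pmf with n = 1: C(1, x) * p**x * (1 - p)**(1 - x)
    return p**x * (1 - p)**(1 - x)

p = 0.3
for x in (0, 1):
    print(x, pmf_braced(x, p), pmf_power(x, p))  # identical values
```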
44,174 | Alternative formula for the Bernoulli pmf? | This is fine, assuming that the domain of $f$ is $\{0,1\}$.
This is also true of the formulations in the other answers.
A different formulation involving the Iverson bracket is
\begin{align*}
f(x) = (1-p)[x=0]+p[x=1].\tag{1}
\end{align*}
One defines
\begin{align*}
[P] &= \begin{cases}
1, & \textrm{if $P$ is true}, \\
0, & \textrm{otherwise}.
\end{cases}
\end{align*}
This is also true of the formulations in the other answers.
A different formulation involving the Iverson bracket is
\begin{align*}
f(x) = ( | Alternative formula for the Bernoulli pmf?
This is fine, assuming that the domain of $f$ is $\{0,1\}$.
This is also true of the formulations in the other answers.
A different formulation involving the Iverson bracket is
\begin{align*}
f(x) = (1-p)[x=0]+p[x=1].\tag{1}
\end{align*}
One defines
\begin{align*}
[P] &= \begi... | Alternative formula for the Bernoulli pmf?
This is fine, assuming that the domain of $f$ is $\{0,1\}$.
This is also true of the formulations in the other answers.
A different formulation involving the Iverson bracket is
\begin{align*}
f(x) = ( |
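In code, `int(condition)` plays the role of the Iverson bracket, so formulation (1) translates directly:

```python
def bernoulli_pmf(x, p):
    # f(x) = (1 - p) * [x = 0] + p * [x = 1], with [P] coded as int(P)
    return (1 - p) * int(x == 0) + p * int(x == 1)

p = 0.25
print(bernoulli_pmf(0, p))   # 0.75
print(bernoulli_pmf(1, p))   # 0.25
print(bernoulli_pmf(2, p))   # 0.0: zero outside the support
```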
44,175 | A description of the mean of the Geometric Distribution - is it unorthodox or just incorrect? | $\exp(\mathbb E[\log(X)])$
is the geometric mean of a positive random variable $X$
not the mean of a geometric random variable.
So either the homework directions put the words in the wrong order, or you transcribed them incorrectly.
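The distinction is easy to see numerically (scipy is used for comparison; the values are illustrative):

```python
import numpy as np
from scipy.stats import geom, gmean

x = np.array([1.0, 2.0, 4.0, 8.0])

# exp(E[log X]): the geometric mean of a positive sample
gm = np.exp(np.mean(np.log(x)))
print(gm, gmean(x))          # both ~2.83 = (1*2*4*8) ** (1/4)

# mean of a Geometric(p) random variable (support 1, 2, ...): 1/p
print(geom.mean(0.2))        # 5.0
```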
44,176 | How to answer critiques about the inapplicability of the framework of frequentist statistics to the real world? | As to the first critique, it could be a critique of any and all branches of the sciences. There are no perfectly repeatable experiments. It isn't really possible to completely control any experiment. A meteor could strike the location of the experiment, for example.
Also, the ability to repeat an experiment is irrel...
44,177 | How to answer critiques about the inapplicability of the framework of frequentist statistics to the real world? | I think the issue with the arguments raised in the question is the naive realist philosophy of models apparently behind it.
If we model an experiment in a frequentist manner, what we do is that, when using the model, we treat the experiment as if it would be infinitely repeatable, with random outcomes the relative freq...
If we model an experiment in a frequentist manner, what we do is that, when | How to answer critiques about the inapplicability of the framework of frequentist statistics to the real world?
I think the issue with the arguments raised in the question is the naive realist philosophy of models apparently behind it.
If we model an experiment in a frequentist manner, what we do is that, when using th... | How to answer critiques about the inapplicability of the framework of frequentist statistics to the
I think the issue with the arguments raised in the question is the naive realist philosophy of models apparently behind it.
If we model an experiment in a frequentist manner, what we do is that, when |
44,178 | How to answer critiques about the inapplicability of the framework of frequentist statistics to the real world? | Having read the blog post, I think the author is saying that we shouldn't use randomness in models of the real world because the real world is not random, since everything (such as a coin flip) actually has a cause.
This makes probability theory the science of last resort. Only after
truly exhausting your ability to i...
44,179 | Elementary explanation of Gaussian Processes | A stochastic process $X(t), t \in T$ is a Gaussian process (GP) if $\sum_i a_i X(t_i)$ is a Gaussian random variable for any finite linear combination. Equivalently, it is a GP if all its finite-dimensional distributions are (multivariate) Gaussian, that is $(X(t_1),X(t_2),\dots,X(t_n))$ is Gaussian for any choice of $\{...
44,180 | Elementary explanation of Gaussian Processes | You are given points sampled from an unknown function; this is your data. Traditional curve-fitting algorithms would try to find the function that best fits the data. A Gaussian process instead learns a distribution over functions, i.e. you could sample from this distribution the functions that are consistent with the ...
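A minimal sketch of sampling functions from a GP prior (an RBF kernel; the hyperparameters are arbitrary illustrative choices):

```python
import numpy as np

def rbf(t, s, length=1.0):
    """Squared-exponential kernel k(t, s) = exp(-(t - s)^2 / (2 l^2))."""
    return np.exp(-0.5 * (t[:, None] - s[None, :]) ** 2 / length ** 2)

t = np.linspace(0.0, 5.0, 50)
K = rbf(t, t) + 1e-8 * np.eye(t.size)   # jitter for numerical stability

# Any finite set of inputs has a multivariate normal distribution,
# so "sampling a function" is just sampling that Gaussian.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(t.size), K, size=3)
print(samples.shape)    # (3, 50): three draws evaluated at 50 inputs
```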
44,181 | Elementary explanation of Gaussian Processes | I found this article very helpful towards building an intuition of GPs.
The mean and covariance functions you define for your prior distribution set up the distribution that you can sample functions from, where the covariance affects the shape (wiggliness, trend, periodicity) of the functions.
Given observation poin...
44,182 | Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate] | If the two vectors are independent, we have $p(\textbf{X,Y})=p(\textbf{X})p(\textbf{Y})$. Considering a specific pair $X_i$, $Y_j$, $$\begin{align}p(X_i,Y_j) &=\int_{X_k,k\neq i}\int_{Y_m,m\neq j}P(\textbf{X},\textbf{Y})\\ &=\int_{X_k,k\neq i}\int_{Y_m,m\neq j}P(\textbf{X})P(\textbf{Y})\\ &=\int_{X_k,k\neq i}P(\textbf{...
If the two vectors are indepdendent, we have $p(\textbf{X,Y})=p(\textbf{X})p(\textbf{Y})$. Considering a specific pair $X_i$,$Y_j$, $$\begin{align}p(X_i,Y_j) &=\int_{X_k,k\neq i}\int_{Y_m,m\neq j}P(\textbf{X},\textbf{Y... | Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate
If the two vectors are indepdendent, we have $p(\textbf{X,Y})=p(\textbf{X})p(\textbf{Y})$. Considering a specific pair $X_i$,$Y_j$, $$\begin{align}p(X_i,Y_j) &=\int_{X_k,k\neq i}\int_{Y_m,m\neq j}P(\t |
44,183 | Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate] | In addition to the answer by @gunes, here it is better to use the definitions directly. Two random variables (or vectors, as in this case) $\mathbf{X}, \mathbf{Y}$ are independent if all events determined by $\mathbf{X}$ are independent from all events determined by $\mathbf{Y}^\dagger$.
But an event determined by $X_... | Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate | In addition to the answer by @gunes, here it is better to use the definitions directly. Two random variables (or vectors, as in this case) $\mathbf{X}, \mathbf{Y}$ are independent if all events determ | Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate]
In addition to the answer by @gunes, here it is better to use the definitions directly. Two random variables (or vectors, as in this case) $\mathbf{X}, \mathbf{Y}$ are independent if all events determined by $\mathbf{X... | Suppose $\mathbf{X, Y}$ are independent random vectors. Are their components independent? [duplicate
In addition to the answer by @gunes, here it is better to use the definitions directly. Two random variables (or vectors, as in this case) $\mathbf{X}, \mathbf{Y}$ are independent if all events determ |
44,184 | Can log likelihood function be negative | The likelihood function is defined as
$$
\mathcal{L}(\theta|X) = \prod_{i=1}^n f_\theta(X_i)
$$
and is a product of probability mass functions (discrete variables) or probability density functions (continuous variables) $f_\theta$ parametrized by $\theta$ and evaluated at the $X_i$ points.
Probability densities are non... | Can log likelihood function be negative | The likelihood function is defined as
$$
\mathcal{L}(\theta|X) = \prod_{i=1}^n f_\theta(X_i)
$$
and is a product of probability mass functions (discrete variables) or probability density functions (co | Can log likelihood function be negative
The likelihood function is defined as
$$
\mathcal{L}(\theta|X) = \prod_{i=1}^n f_\theta(X_i)
$$
and is a product of probability mass functions (discrete variables) or probability density functions (continuous variables) $f_\theta$ parametrized by $\theta$ and evaluated at the $X_... | Can log likelihood function be negative
The likelihood function is defined as
$$
\mathcal{L}(\theta|X) = \prod_{i=1}^n f_\theta(X_i)
$$
and is a product of probability mass functions (discrete variables) or probability density functions (co |
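The point that densities can exceed 1 (so the log-likelihood can take either sign) is easy to check numerically. The normal model and toy data below are illustrative, not from the original answer:

```python
import math

def normal_loglik(data, mu, sigma):
    """Sum of log N(x | mu, sigma^2) over the data points."""
    return sum(
        -0.5 * math.log(2 * math.pi * sigma ** 2)
        - (x - mu) ** 2 / (2 * sigma ** 2)
        for x in data
    )

data = [0.01, -0.02, 0.005]
# With sigma << 1 each density value exceeds 1, so the log-likelihood is
# positive; with sigma = 1 each density value is below 1, so it is
# negative. Neither sign indicates anything wrong with the fit.
ll_tight = normal_loglik(data, 0.0, 0.05)
ll_wide = normal_loglik(data, 0.0, 1.0)
```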
44,185 | Why did I get a negative adjusted-$R^2$ in simple linear regression? | Adjusted $R^2$ is:
$${R}_\text{adj}^{2}={1-(1-R^{2}){n-1 \over n-p-1}}$$
where $p$ is the number of predictors (not counting the intercept) and $n$ is the number of observations.
This will be less than $0$ when
$$\frac{p}{n-1}>R^2\,.$$
$R^2$ can be as low as $0$, so this may happen any time $p>0$. This means that it ca... | Why did I get a negative adjusted-$R^2$ in simple linear regression? | Adjusted $R^2$ is:
$${R}_\text{adj}^{2}={1-(1-R^{2}){n-1 \over n-p-1}}$$
where $p$ is the number of predictors (not counting the intercept) and $n$ is the number of observations.
This will be less tha | Why did I get a negative adjusted-$R^2$ in simple linear regression?
Adjusted $R^2$ is:
$${R}_\text{adj}^{2}={1-(1-R^{2}){n-1 \over n-p-1}}$$
where $p$ is the number of predictors (not counting the intercept) and $n$ is the number of observations.
This will be less than $0$ when
$$\frac{p}{n-1}>R^2\,.$$
$R^2$ can be as... | Why did I get a negative adjusted-$R^2$ in simple linear regression?
Adjusted $R^2$ is:
$${R}_\text{adj}^{2}={1-(1-R^{2}){n-1 \over n-p-1}}$$
where $p$ is the number of predictors (not counting the intercept) and $n$ is the number of observations.
This will be less tha |
44,186 | Why did I get a negative adjusted-$R^2$ in simple linear regression? | A way to conceptualize this is that an adjusted $R^2$ estimates the population $R^2$, so an unbiased estimator of a population $R^2$ of zero has to average zero, thus necessitating that some sample estimates must be below zero. | Why did I get a negative adjusted-$R^2$ in simple linear regression? | A way to conceptualize this is that an adjusted $R^2$ estimates the population $R^2$, so an unbiased estimator of a population $R^2$ of zero has to average zero, thus necessitating that some sample es | Why did I get a negative adjusted-$R^2$ in simple linear regression?
A way to conceptualize this is that an adjusted $R^2$ estimates the population $R^2$, so an unbiased estimator of a population $R^2$ of zero has to average zero, thus necessitating that some sample estimates must be below zero. | Why did I get a negative adjusted-$R^2$ in simple linear regression?
A way to conceptualize this is that an adjusted $R^2$ estimates the population $R^2$, so an unbiased estimator of a population $R^2$ of zero has to average zero, thus necessitating that some sample es |
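The condition $p/(n-1) > R^2$ from the first answer can be checked directly; the numbers below are made up for illustration:

```python
def adjusted_r2(r2, n, p):
    """1 - (1 - R^2)(n - 1)/(n - p - 1), with p predictors excluding the intercept."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Here p / (n - 1) = 1/9 > 0.005, so the adjusted value goes negative ...
low = adjusted_r2(0.005, n=10, p=1)
# ... while a large R^2 stays comfortably positive after adjustment.
high = adjusted_r2(0.8, n=10, p=1)
```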
44,187 | Relationship between logistic regression and Softmax Regression with 2 classes | Suppose you have a binary classification problem with $p$ features (including bias) and you do Multi-class regression with softmax activation. Then, the probability of an observation, $x,$ representing class $1$ is,
$$
\begin{split}
p_1(x) &= \frac{\exp(\beta_1^T x)}{\exp(\beta_1^T x) + \exp(\beta_2^T x)} \\
&= \frac{1... | Relationship between logistic regression and Softmax Regression with 2 classes | Suppose you have a binary classification problem with $p$ features (including bias) and you do Multi-class regression with softmax activation. Then, the probability of an observation, $x,$ representin | Relationship between logistic regression and Softmax Regression with 2 classes
Suppose you have a binary classification problem with $p$ features (including bias) and you do Multi-class regression with softmax activation. Then, the probability of an observation, $x,$ representing class $1$ is,
$$
\begin{split}
p_1(x) &... | Relationship between logistic regression and Softmax Regression with 2 classes
Suppose you have a binary classification problem with $p$ features (including bias) and you do Multi-class regression with softmax activation. Then, the probability of an observation, $x,$ representin |
44,188 | Relationship between logistic regression and Softmax Regression with 2 classes | In multinomial regression we model the odds of observing $Y=k$ for each of the $K-1$ classes relative to the $K$-th class. So with $K=2$ the model reduces to logistic regression.
Let me quote Wikipedia:
One fairly simple way to arrive at the multinomial logit model is to
imagine, for $K$ possible outcomes, running $K-... | Relationship between logistic regression and Softmax Regression with 2 classes | In multinomial regression we model the odds of observing $Y=k$ for each of the $K-1$ classes relative to the $K$-th class. So with $K=2$ the model reduces to logistic regression.
Let me quote Wikipedia: | Relationship between logistic regression and Softmax Regression with 2 classes
In multinomial regression we model the odds of observing $Y=k$ for each of the $K-1$ classes relative to the $K$-th class. So with $K=2$ the model reduces to logistic regression.
Let me quote Wikipedia:
One fairly simple way to arrive at the ... | Relationship between logistic regression and Softmax Regression with 2 classes
In multinomial regression we model the odds of observing $Y=k$ for each of the $K-1$ classes relative to the $K$-th class. So with $K=2$ the model reduces to logistic regression.
Let me quote Wikipedia: |
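The algebraic reduction described in both answers can be verified numerically: for two classes, the softmax probability depends only on the score difference, which is exactly the logistic sigmoid. The scores below are arbitrary test values:

```python
import math

def softmax2(z1, z2):
    """Two-class softmax probability of class 1."""
    e1, e2 = math.exp(z1), math.exp(z2)
    return e1 / (e1 + e2)

def sigmoid(z):
    """Logistic (inverse-logit) function."""
    return 1.0 / (1.0 + math.exp(-z))

# Only the difference z1 - z2 matters, which is why only beta_1 - beta_2
# is identifiable and two-class softmax collapses to logistic regression.
pairs = [(0.3, -1.2), (2.0, 2.0), (-0.5, 4.0)]
max_gap = max(abs(softmax2(a, b) - sigmoid(a - b)) for a, b in pairs)
```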
44,189 | Regression trees - how are splits decided | Well, it depends on the implementation you are using. I assume we are talking about the original CART paper [1]
1) then there is always a single split resulting in two children.
2) The value used for splitting is determined by testing every value for every variable, so that the one which minimizes the sum of squares error... | Regression trees - how are splits decided | Well, it depends on the implementation you are using. I assume we are talking about the original CART paper [1]
1) then there is always a single split resulting in two children.
2) The value used for | Regression trees - how are splits decided
Well, it depends on the implementation you are using. I assume we are talking about the original CART paper [1]
1) then there is always a single split resulting in two children.
2) The value used for splitting is determined by testing every value for every variable, that the on... | Regression trees - how are splits decided
Well, it depends on the implementation you are using. I assume we are talking about the original CART paper [1]
1) then there is always a single split resulting in two children.
2) The value used for |
44,190 | Is this really perfect separation in logistic regression, or is something else going on? | Looking at this
Coefficients:
(Intercept) SEX
-3.157e+01 -2.249e-13
I see that your model is returning a numeric zero for the coefficient of SEX ($-2.2 \times 10^{-13}$ may as well be $0$), and is driving the intercept to $-31.57$. Plugging that value into the logistic function in my R interpreter I get... | Is this really perfect separation in logistic regression, or is something else going on? | Looking at this
Coefficients:
(Intercept) SEX
-3.157e+01 -2.249e-13
I see that your model is returning a numeric zero for the coefficient of SEX ($-2.2 \times 10^{-13}$ may as well be $ | Is this really perfect separation in logistic regression, or is something else going on?
Looking at this
Coefficients:
(Intercept) SEX
-3.157e+01 -2.249e-13
I see that your model is returning a numeric zero for the coefficient of SEX ($-2.2 \times 10^{-13}$ may as well be $0$), and is driving the interce... | Is this really perfect separation in logistic regression, or is something else going on?
Looking at this
Coefficients:
(Intercept) SEX
-3.157e+01 -2.249e-13
I see that your model is returning a numeric zero for the coefficient of SEX ($-2.2 \times 10^{-13}$ may as well be $ |
44,191 | Is this really perfect separation in logistic regression, or is something else going on? | I think with a sample of 16000 it is unlikely you have perfect prediction; try doing cross tabulations of each variable before doing the individual logit models and see if there is perfect prediction. This way you can also check if the response variable is coded as an indicator. | Is this really perfect separation in logistic regression, or is something else going on? | I think with a sample of 16000 it is unlikely you have perfect prediction; try doing cross tabulations of each variable before doing the individual logit models and see if there is perfect prediction. Th | Is this really perfect separation in logistic regression, or is something else going on?
I think with a sample of 16000 it is unlikely you have perfect prediction; try doing cross tabulations of each variable before doing the individual logit models and see if there is perfect prediction. This way you can also check if th... | Is this really perfect separation in logistic regression, or is something else going on?
I think with a sample of 16000 it is unlikely you have perfect prediction; try doing cross tabulations of each variable before doing the individual logit models and see if there is perfect prediction. Th
44,192 | Is this really perfect separation in logistic regression, or is something else going on? | First, it's upsetting to see that your statistics professor is training you to use stepwise selection for model building. See this page for an introduction to the problems with stepwise selection and for choices of better alternatives; follow the stepwise-regression and model-selection tags on this site.
With 3 levels ... | Is this really perfect separation in logistic regression, or is something else going on? | First, it's upsetting to see that your statistics professor is training you to use stepwise selection for model building. See this page for an introduction to the problems with stepwise selection and | Is this really perfect separation in logistic regression, or is something else going on?
First, it's upsetting to see that your statistics professor is training you to use stepwise selection for model building. See this page for an introduction to the problems with stepwise selection and for choices of better alternati... | Is this really perfect separation in logistic regression, or is something else going on?
First, it's upsetting to see that your statistics professor is training you to use stepwise selection for model building. See this page for an introduction to the problems with stepwise selection and |
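To see why the first answer reads the fitted model as degenerate, plug the reported intercept into the logistic function (the answer describes doing this in an R interpreter; the Python sketch below reproduces the same arithmetic):

```python
import math

def logistic(eta):
    """Inverse-logit: maps a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-eta))

# With the SEX coefficient numerically zero, every fitted probability is
# logistic(-31.57): essentially zero, the signature of a fit driven to a
# boundary rather than a genuine covariate effect.
p = logistic(-31.57)
```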
44,193 | What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal? | As I mentioned in comments, showing what minimizes $\sum (x_i-\alpha)^2$ can be done in several ways, such as by simple calculus, or by writing $\sum (x_i-\alpha)^2=\sum (x_i-\bar{x}+\bar{x}-\alpha)^2$. Let's look at the second one:
$\sum (x_i-\alpha)^2=\sum (x_i-\bar{x}+\bar{x}-\alpha)^2$
$\hspace{2.55cm}=\sum (x_i-\b... | What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal? | As I mentioned in comments, showing what minimizes $\sum (x_i-\alpha)^2$ can be done in several ways, such as by simple calculus, or by writing $\sum (x_i-\alpha)^2=\sum (x_i-\bar{x}+\bar{x}-\alpha)^2 | What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal?
As I mentioned in comments, showing what minimizes $\sum (x_i-\alpha)^2$ can be done in several ways, such as by simple calculus, or by writing $\sum (x_i-\alpha)^2=\sum (x_i-\bar{x}+\bar{x}-\alpha)^2$. Let's look at the second one:
$\sum (x_i-\alpha)^... | What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal?
As I mentioned in comments, showing what minimizes $\sum (x_i-\alpha)^2$ can be done in several ways, such as by simple calculus, or by writing $\sum (x_i-\alpha)^2=\sum (x_i-\bar{x}+\bar{x}-\alpha)^2 |
44,194 | What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal? | Putting the first derivative (with respect to $\alpha$) equal to zero you find $2\sum_i (x_i -\alpha) (-1) = 0$ so $\sum_i x_i = n \alpha$ or $\alpha = \frac{1}{n} \sum_i x_i$ | What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal? | Putting the first derivative (with respect to $\alpha$) equal to zero you find $2\sum_i (x_i -\alpha) (-1) = 0$ so $\sum_i x_i = n \alpha$ or $\alpha = \frac{1}{n} \sum_i x_i$ | What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal?
Putting the first derivative (with respect to $\alpha$) equal to zero you find $2\sum_i (x_i -\alpha) (-1) = 0$ so $\sum_i x_i = n \alpha$ or $\alpha = \frac{1}{n} \sum_i x_i$ | What value of $\alpha$ makes $\sum_{i=0}^n (x_i-\alpha)^2$ minimal?
Putting the first derivative (with respect to $\alpha$) equal to zero you find $2\sum_i (x_i -\alpha) (-1) = 0$ so $\sum_i x_i = n \alpha$ or $\alpha = \frac{1}{n} \sum_i x_i$ |
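Both derivations above can be sanity-checked numerically on arbitrary data: no candidate $\alpha$ beats the sample mean.

```python
def sse(data, alpha):
    """Sum of squared deviations of the data from alpha."""
    return sum((x - alpha) ** 2 for x in data)

data = [2.0, 3.0, 5.0, 10.0]
mean = sum(data) / len(data)  # 5.0

# Scan a grid of candidate alphas around the mean; the mean itself
# minimizes the sum of squares, as the calculus and algebra both show.
candidates = [mean + k / 10.0 for k in range(-50, 51)]
best = min(candidates, key=lambda a: sse(data, a))
```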
44,195 | Is the accessible population a random sample? | This is an important question, made explicit by Deming and Stephan (1941), who first used the word "superpopulation" to describe the approach with that name: assume that the current population is itself a sample from a larger, hypothetical, population. The concept is also implicit in Cochran (1939). See Stanek, 2000b,... | Is the accessible population a random sample? | This is an important question, made explicit by Deming and Stephan (1941), who first used the word "superpopulation" to describe the approach with that name: assume that the current population is its
This is an important question, made explicit by Deming and Stephan (1941), who first used the word "superpopulation" to describe the approach with that name: assume that the current population is itself a sample from a larger, hypothetical, population. The concept is impli... | Is the accessible population a random sample?
This is an important question, made explicit by Deming and Stephan (1941), who first used the word "superpopulation" to describe the approach with that name: assume that the current population is its
44,196 | Is the accessible population a random sample? | At its face value, this is a convenience sample. Real sampling involves randomization, and I don't think any university will allow you to randomly put students to sections. There's undoubtedly an issue of self-selection that produces biased samples with skewed prevalences of students with different backgrounds and char... | Is the accessible population a random sample? | At its face value, this is a convenience sample. Real sampling involves randomization, and I don't think any university will allow you to randomly put students to sections. There's undoubtedly an issu | Is the accessible population a random sample?
At its face value, this is a convenience sample. Real sampling involves randomization, and I don't think any university will allow you to randomly put students to sections. There's undoubtedly an issue of self-selection that produces biased samples with skewed prevalences o... | Is the accessible population a random sample?
At its face value, this is a convenience sample. Real sampling involves randomization, and I don't think any university will allow you to randomly put students to sections. There's undoubtedly an issu |
44,197 | What is the difference between various Kruskal-Wallis post-hoc tests? | Understanding how these test implementations differ requires understanding the actual test statistics themselves.
For example, dunn.test provides Dunn's (1964) z test approximation to a rank sum test employing both the same ranks used in the Kruskal-Wallis test, and the pooled variance estimate implied by the null hypo... | What is the difference between various Kruskal-Wallis post-hoc tests? | Understanding how these test implementations differ requires understanding the actual test statistics themselves.
For example, dunn.test provides Dunn's (1964) z test approximation to a rank sum test | What is the difference between various Kruskal-Wallis post-hoc tests?
Understanding how these test implementations differ requires understanding the actual test statistics themselves.
For example, dunn.test provides Dunn's (1964) z test approximation to a rank sum test employing both the same ranks used in the Kruskal-... | What is the difference between various Kruskal-Wallis post-hoc tests?
Understanding how these test implementations differ requires understanding the actual test statistics themselves.
For example, dunn.test provides Dunn's (1964) z test approximation to a rank sum test |
44,198 | What is the difference between various Kruskal-Wallis post-hoc tests? | I know that this thread is older, but I came across it because I was looking for answers about the post-hoc test applied to the Kruskal-Wallis test found in the agricolae package. I really needed to know for documentation purposes so I personally emailed the maintainer to ask what procedure is used for the post-hoc tes... | What is the difference between various Kruskal-Wallis post-hoc tests? | I know that this thread is older, but I came across it because I was looking for answers about the post-hoc test applied to the Kruskal-Wallis test found in the agricolae package. I really needed to k | What is the difference between various Kruskal-Wallis post-hoc tests?
I know that this thread is older, but I came across it because I was looking for answers about the post-hoc test applied to the Kruskal-Wallis test found in the agricolae package. I really needed to know for documentation purposes so I personally ema... | What is the difference between various Kruskal-Wallis post-hoc tests?
I know that this thread is older, but I came across it because I was looking for answers about the post-hoc test applied to the Kruskal-Wallis test found in the agricolae package. I really needed to k |
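Both answers concern post-hoc tests built on the pooled Kruskal-Wallis ranks. A minimal sketch of the H statistic itself (no tie correction, distinct values assumed) shows those pooled ranks at work:

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic without tie correction.

    Built on the pooled ranks that Dunn's (1964) post-hoc z tests
    reuse; assumes all values are distinct."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    n = len(pooled)
    total = 0.0
    for g in groups:
        mean_rank = sum(rank[v] for v in g) / len(g)
        total += len(g) * (mean_rank - (n + 1) / 2) ** 2
    return 12.0 / (n * (n + 1)) * total

# Three fully separated groups: mean ranks 2, 5 and 8 around the grand
# mean rank of 5, giving H = 12/90 * 54 = 7.2.
h = kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```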
44,199 | What is the famous data set that looks totally different but has similar summary stats? | You must be thinking of Anscombe's quartet. | What is the famous data set that looks totally different but has similar summary stats? | You must be thinking of Anscombe's quartet. | What is the famous data set that looks totally different but has similar summary stats?
You must be thinking of Anscombe's quartet. | What is the famous data set that looks totally different but has similar summary stats?
You must be thinking of Anscombe's quartet. |
44,200 | What is the famous data set that looks totally different but has similar summary stats? | Anscombe quartet is the name (as said before), and its standard plots are below.
It was constructed in Graphs in Statistical Analysis, The American Statistician, 1973. Since then, there have been attempts to reproduce or generalize it to a broader extent, for instance:
Generating Data with Identical Statistics bu... | What is the famous data set that looks totally different but has similar summary stats? | Anscombe quartet is the name (as said before), and its standard plots are below.
It was constructed in from Graphs in Statistical Analysis, The American Statistician, 1973. Since then, there have bee | What is the famous data set that looks totally different but has similar summary stats?
Anscombe quartet is the name (as said before), and its standard plots are below.
It was constructed in Graphs in Statistical Analysis, The American Statistician, 1973. Since then, there have been attempts to reproduce or genera... | What is the famous data set that looks totally different but has similar summary stats?
Anscombe quartet is the name (as said before), and its standard plots are below.
It was constructed in Graphs in Statistical Analysis, The American Statistician, 1973. Since then, there have bee
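As a rough check (values hardcoded from the published 1973 quartet; treat them as illustrative rather than authoritative), the four sets share means and correlations to a couple of decimals despite looking completely different when plotted:

```python
import statistics as st

# Anscombe's quartet: sets 1-3 share x, set 4 has its own x.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]
y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]
y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    mx, my = st.mean(xs), st.mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs)
           * sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den

sets = [(x123, y1), (x123, y2), (x123, y3), (x4, y4)]
y_means = [st.mean(y) for _, y in sets]   # all roughly 7.50
corrs = [corr(x, y) for x, y in sets]     # all roughly 0.816
```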