idx | question | answer |
|---|---|---|
10,201 | How to easily determine the results distribution for multiple dice? | Characteristic functions can make computations involving the sums and differences of random variables really easy. Mathematica has lots of functions to work with statistical distributions, including a built-in function to transform a distribution into its characteristic function.
I'd like to illustrate this with two concrete exa... |
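The answer's Mathematica workflow is truncated above; as a language-neutral illustration of the same characteristic-function idea (a sketch, not the answer's own code), one can raise the characteristic function of a single fair die to the $n$-th power and invert it with a discrete Fourier sum:

```python
import cmath

def dice_pmf_via_cf(n, s=6):
    """PMF of the sum of n fair s-sided dice, recovered by inverting
    phi(t)^n, where phi(t) = E[exp(i t X)] is the characteristic
    function of one die and the power comes from independence."""
    N = n * s + 1  # grid large enough to hold every possible sum

    def phi(t):  # characteristic function of a single s-sided die
        return sum(cmath.exp(1j * t * k) for k in range(1, s + 1)) / s

    pmf = {}
    for total in range(n, n * s + 1):
        # inverse discrete Fourier transform evaluated at the support point
        p = sum(phi(2 * cmath.pi * m / N) ** n *
                cmath.exp(-2j * cmath.pi * m * total / N)
                for m in range(N)) / N
        pmf[total] = p.real
    return pmf

pmf = dice_pmf_via_cf(2)
print(round(pmf[7], 6))  # → 0.166667, i.e. 6/36 for two dice summing to 7
```

Because the support fits inside the length-$N$ grid, the discrete inversion is exact up to floating-point error.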
10,202 | How to easily determine the results distribution for multiple dice? | Approximate Solution
I explained the exact solution earlier (see below). I will now offer an approximate solution which may suit your needs better.
Let:
$X_i$ be the outcome of a roll of an $s$-faced die, where $i=1, \ldots, n$.
$S$ be the total of all $n$ dice.
$\bar{X}$ be the sample average.
By definition, we have:
$\ba... |
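The truncated answer sets up a normal approximation to the total; a minimal sketch of the standard CLT version in Python (the answer's own derivation is cut off above, so the mean and variance formulas here are the textbook ones for a fair $s$-sided die, not quoted from it):

```python
from math import sqrt
from statistics import NormalDist

def approx_dice_pmf(total, n, s=6):
    """Normal (CLT) approximation, with continuity correction, to
    P(sum of n fair s-sided dice == total)."""
    mean = n * (s + 1) / 2
    var = n * (s * s - 1) / 12  # variance of a single s-sided die
    z = NormalDist(mean, sqrt(var))
    return z.cdf(total + 0.5) - z.cdf(total - 0.5)

# Already reasonable for two dice: exact P(sum = 7) is 6/36 ≈ 0.1667
print(approx_dice_pmf(7, 2))
```

The approximation improves quickly as $n$ grows, which is the point of the "approximate solution" above.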
10,203 | How to easily determine the results distribution for multiple dice? | Here's another way to calculate the probability distribution of the sum of two dice by hand using convolutions.
To keep the example really simple, we're going to calculate the probability distribution of the sum of a three-sided die (d3) whose random variable we will call X and a two-sided die (d2) whose random variabl... |
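The by-hand convolution this answer describes can be sketched in a few lines of Python (an illustration, not the answer's own worked example, which is cut off above):

```python
from fractions import Fraction

def convolve(p, q):
    """Convolve two PMFs given as {value: probability} dicts; this is
    exactly the distribution of the sum of the two independent dice."""
    out = {}
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] = out.get(x + y, 0) + px * qy
    return out

d3 = {k: Fraction(1, 3) for k in (1, 2, 3)}  # fair three-sided die
d2 = {k: Fraction(1, 2) for k in (1, 2)}     # fair two-sided die
print(convolve(d3, d2))
# → {2: Fraction(1, 6), 3: Fraction(1, 3), 4: Fraction(1, 3), 5: Fraction(1, 6)}
```

Using exact fractions keeps the result identical to the by-hand computation.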
10,204 | How to easily determine the results distribution for multiple dice? | Love the username! Well done :)
The outcomes you should count are the dice rolls, all $6\times 6=36$ of them as shown in your table.
For example, $\frac{1}{36}$ of the time the sum is $2$, and $\frac{2}{36}$ of the time the sum is $3$, and $\frac{3}{36}$ of the time the sum is $4$, and so on. |
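The counting is easy to verify by enumerating all 36 equally likely ordered rolls (a Python sketch, not part of the original answer):

```python
from itertools import product
from collections import Counter

# Tally the sum over every ordered pair of two six-sided dice
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
print(counts[2], counts[3], counts[4], counts[7])  # → 1 2 3 6
```

Dividing each count by 36 gives the probabilities quoted in the answer.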
10,205 | How to easily determine the results distribution for multiple dice? | You can solve this with a recursive formula. In that case the probabilities of the rolls with $n$ dice are calculated by the rolls with $n-1$ dice.
$$a_n(l) = \sum_{l-6 \leq k \leq l-1 \\ \text{ and } n-1 \leq k \leq 6(n-1)} a_{n-1}(k)$$
The first constraint on $k$ in the summation restricts it to the six preceding numbers. E.g. if you w... |
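The recursion can be sketched directly in Python with memoization (an illustrative implementation of the formula above, not code from the answer):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n, l):
    """Number of ways n six-sided dice sum to l, via the recursion
    a_n(l) = sum of a_{n-1}(k) over the six preceding values of k
    that are themselves reachable with n - 1 dice."""
    if n == 1:
        return 1 if 1 <= l <= 6 else 0
    return sum(a(n - 1, k)
               for k in range(max(l - 6, n - 1), min(l - 1, 6 * (n - 1)) + 1))

print(a(2, 7))  # → 6, the familiar count for a sum of 7 with two dice
```

Dividing by $6^n$ turns the counts into probabilities.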
10,206 | How to easily determine the results distribution for multiple dice? | One approach is to say that the probability $X_n=k$ is the coefficient of $x^{k}$ in the expansion of the generating function $$\left(\frac{x^6+x^5+x^4+x^3+x^2+x^1}{6}\right)^n=\left(\frac{x(1-x^6)}{6(1-x)}\right)^n$$
So for example with six dice and a target of $k=22$, you will find $P(X_6=22)= \frac{4221}{6^6}\approx 0.0905$. That ... |
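Expanding the generating function by repeated polynomial multiplication makes any coefficient easy to check (a plain-Python sketch, not part of the answer):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = power)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

die = [0, 1, 1, 1, 1, 1, 1]  # coefficients of x + x^2 + ... + x^6
poly = [1]
for _ in range(6):           # expand (x + ... + x^6)^6
    poly = poly_mul(poly, die)

# The coefficient of x^22 counts the favourable outcomes for six dice
print(poly[22], 6 ** 6)  # → 4221 46656
```

Dividing the coefficient by $6^6$ gives the probability for the target sum.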
10,207 | What is importance sampling? | Importance sampling is a form of sampling from a distribution different from the distribution of interest to more easily obtain better estimates of a parameter from the distribution of interest. Typically this will provide estimates of the parameter with a lower variance than would be obtained by sampling directly from... |
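A minimal sketch of the variance-reduction idea in Python (the numbers are hypothetical; it estimates a standard normal tail probability by sampling from a proposal shifted onto the rare region and reweighting by the density ratio):

```python
import random
from math import exp, sqrt, erfc

random.seed(0)

# Estimate P(X > 3) for X ~ N(0, 1), sampling from the proposal N(3, 1)
# which puts most of its mass in the "important" region.
n = 100_000
draws = [random.gauss(3.0, 1.0) for _ in range(n)]

# Density ratio p(x)/q(x) = exp(-x^2/2) / exp(-(x-3)^2/2) = exp(4.5 - 3x)
est = sum(exp(4.5 - 3.0 * x) for x in draws if x > 3.0) / n

exact = erfc(3.0 / sqrt(2.0)) / 2.0  # true tail probability ≈ 0.00135
print(est, exact)
```

Naive Monte Carlo would see the event in only ~0.1% of draws; the reweighted estimator is far more precise for the same sample size, which is the point of the answer.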
10,208 | What is importance sampling? | Importance sampling is a simulation or Monte Carlo method intended for approximating integrals. The term "sampling" is somewhat confusing in that it does not intend to provide samples from a given distribution.
The intuition behind importance sampling is that a well-defined integral, like
$$\mathfrak{I}=\int_\mathfrak{... |
10,209 | Why are standard frequentist hypotheses so uninteresting? | You can regard this as an example of 'All models are wrong but some are useful'. Null hypothesis testing is a simplification.
Null hypothesis testing is often not the primary goal and instead it is more like a tool for some other goal, and it is used as an indicator of the quality of a certain result/measurement.
An e... |
10,210 | Why are standard frequentist hypotheses so uninteresting? | You have to crawl before you walk, and simple examples like testing a coin for bias, under a null hypothesis of zero bias, make for a teachable example for complete beginners (which every single one of us was at one point).
Jumping straight to, say, equivalence testing without even discussing easier null hypothesis sig... |
10,211 | Why are standard frequentist hypotheses so uninteresting? | No model is ever true. This means that not only is the null hypothesis not true; neither is the alternative, nor something like $|\mu_1-\mu_2|<\epsilon$. If you're interested in which model is true, you're generally lost in model-based statistics; there's nothing particularly wrong about standard null hypotheses. Wheth... |
10,212 | Why are standard frequentist hypotheses so uninteresting? | The appeal of such hypotheses lies in their simplicity and the analytical tractability (or just simplicity) of testing them.*
In "real life" one is always only interested in some finite accuracy $\epsilon$, meaning the hypothesis of interest is actually of the form $H_0: |\mu-\mu_0|<\epsilon$.
This might be true fo... |
10,213 | Why are standard frequentist hypotheses so uninteresting? | (1) The more boring a null hypothesis, the more interesting it is when you are unable to reject.
E.g. after one million flips, we still cannot distinguish the coin from perfectly unbiased. (After one million patients, we still cannot distinguish the treatment from placebo.)
(2) If your question isn't a testing question... |
10,214 | Why are standard frequentist hypotheses so uninteresting? | To calculate a p-value, you need a null hypothesis such that the test statistic has a known distribution. Simply asserting $|\mu|<\epsilon$ doesn't give you a distribution. |
10,215 | Why are standard frequentist hypotheses so uninteresting? | In "real life" one is always only interested in some finite accuracy ϵ, meaning the hypothesis of interest is actually of the form H0:|μ−μ0|<ϵ.
Indeed, it is an important practical issue that a significant effect may turn out to be small and therefore "insignificant" for practical purposes. As an anecdotal example: bilin... |
10,216 | Why are standard frequentist hypotheses so uninteresting? | I wouldn't put too much philosophical labor into contemplating the null hypothesis per se, as the OP does.
As I have discussed here, this is a device through which we can determine whether the data allow us to assert probabilistically the direction of influence.
And the direction of influence is a heavyweight, in all w... |
10,217 | When does Fisher's "go get more data" approach make sense? | The frequentist paradigm is a conflation of Fisher's and Neyman-Pearson's views. Problems arise only when one approach is used with the other's interpretation.
It should seem strange to anyone that collecting more data is problematic, as more data is more evidence. Indeed, the problem lies not in collecting more data, but in... |
10,218 | When does Fisher's "go get more data" approach make sense? | Given a big enough sample size, a test will always show significant results, unless the true effect size is exactly zero, as discussed here. In practice, the true effect size is not zero, so gathering more data will eventually be able to detect the most minuscule differences.
The (IMO) facetious answer from Fisher was... |
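The point is easy to make concrete: hold a tiny standardized effect fixed and let $n$ grow (a z-test sketch in Python; the effect size 0.01 is made up for illustration):

```python
from math import sqrt
from statistics import NormalDist

def two_sided_p(effect, n):
    """Two-sided z-test p-value when the observed standardized effect
    equals `effect` with n observations (population sd taken as 1)."""
    z = effect * sqrt(n)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# The same minuscule effect goes from "nowhere near significant"
# to "overwhelmingly significant" purely by adding data
for n in (100, 10_000, 1_000_000):
    print(n, two_sided_p(0.01, n))
```

With a million observations even an effect of 0.01 standard deviations is detected, which is why "just get more data" eventually always rejects a point null.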
10,219 | When does Fisher's "go get more data" approach make sense? | What we call P-hacking is applying a significance test multiple times and only reporting the significant results. Whether this is good or bad is situationally dependent.
To explain, let's think about true effects in Bayesian terms, rather than null and alternative hypotheses. As long as we believe our effects of inte... |
10,220 | When does Fisher's "go get more data" approach make sense? | Thanks. There are a couple things to bear in mind here:
The quote may be apocryphal.
It's quite reasonable to go get more / better data, or data from a different source (more precise scale, cf., @Underminer's answer; different situation or controls; etc.), for a second study (cf., @Glen_b's comment). That is, yo... |
10,221 | When does Fisher's "go get more data" approach make sense? | If the alternative had a small a priori probability, then an experiment that fails to reject the null will decrease it further, making any further research even less cost-effective. For instance, suppose the a priori probability is .01. Then your entropy is .08 bits. If the probability gets reduced to .001, then your e... |
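The entropy figures quoted can be reproduced directly (a Python check of the answer's arithmetic, using the standard binary-entropy formula):

```python
from math import log2

def entropy_bits(p):
    """Shannon entropy, in bits, of a binary event with probability p."""
    return -(p * log2(p) + (1 - p) * log2(1 - p))

# The answer's figures: prior 0.01 gives about .08 bits of entropy
print(round(entropy_bits(0.01), 2), round(entropy_bits(0.001), 2))  # → 0.08 0.01
```

Lower entropy means there is less uncertainty left for an experiment to resolve, which is the cost-effectiveness argument being made.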
10,222 | R code for time series forecasting using Kalman filter | Have you looked at Time Series Task View at CRAN?
It lists several entries for packages covering Kalman filtering:
dlm
FKF
KFAS
and more, as this is a pretty common technique for time series estimation. |
10,223 | R code for time series forecasting using Kalman filter | In addition to the packages mentioned in other answers, you may want to look at
package forecast which deals with a particular class of models cast in state-space form and package MARSS with examples and applications in biology (see in particular the well-written manual, Chap. 5).
For general applications, I agree, th... |
10,224 | R code for time series forecasting using Kalman filter | For good examples, look at the dlm vignette. I would avoid all the other packages if you don't have a clear idea of what you want to do and how. |
10,225 | R code for time series forecasting using Kalman filter | The package stsm is now available on CRAN. The package offers some utilities to fit the basic structural time series model.
The packages mentioned in other answers provide flexible interfaces to cast a broad range of time series models in state-space form and give sound implementations of the Kalman filter. However, i... |
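The thread asks about R packages, but the filter these packages run for the basic local-level model fits in a few lines. A minimal Python sketch (a generic textbook implementation, not code from any of the packages named):

```python
def kalman_local_level(y, q, r, a0=0.0, p0=1e7):
    """One-dimensional Kalman filter for the local-level model
        y_t = a_t + e_t,      e_t ~ N(0, r)   (observation noise)
        a_t = a_{t-1} + w_t,  w_t ~ N(0, q)   (level noise)
    Returns the filtered state means; p0 is a diffuse initial variance."""
    a, p = a0, p0
    out = []
    for obs in y:
        p = p + q                # predict: level variance grows by q
        k = p / (p + r)          # Kalman gain
        a = a + k * (obs - a)    # update toward the new observation
        p = (1 - k) * p
        out.append(a)
    return out

# On a constant series the filtered level locks onto the true value
filtered = kalman_local_level([5.0] * 50, q=0.01, r=1.0)
print(round(filtered[-1], 3))  # → 5.0
```

The one-step-ahead forecast at each time is just the current filtered level, which is the basis of the forecasting the question asks about.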
10,226 | What is the best way to visualize relationship between discrete and continuous variables? | Below: The original plot may be misleading because the discrete nature of the variables makes the points overlap:
One way to work around it is to introduce some transparency to the data symbol:
Another way is to displace the location of the symbol mildly to create a smear. This technique is called "jittering:"
Both ... |
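Jittering is just a small random displacement applied before plotting; a Python sketch of the bookkeeping (the plotting call itself is omitted, and the width 0.15 is an arbitrary illustrative choice):

```python
import random

random.seed(42)

def jitter(values, width=0.15):
    """Displace each (discrete) coordinate by a small uniform amount so
    that overlapping points become distinguishable when plotted."""
    return [v + random.uniform(-width, width) for v in values]

x = [1, 1, 1, 2, 2, 3]  # heavily overlapping discrete x-values
xj = jitter(x)
print(all(abs(a - b) <= 0.15 for a, b in zip(xj, x)))  # → True
```

The width should stay small relative to the gap between discrete levels so that points remain visually attached to their true category.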
10,227 | What is the best way to visualize relationship between discrete and continuous variables? | I would use boxplots to display the relationship between a discrete and a continuous variable. You can make your boxplots vertical or horizontal with standard statistical software, so it's easy to visualize as either IV or DV. It is possible to use a scatterplot with a discrete and continuous variable, just assign a ...
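A hedged matplotlib sketch of the boxplot idea above, with made-up data, showing both the vertical and horizontal orientations:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Made-up data: a continuous outcome split by a 3-level discrete variable.
rng = np.random.default_rng(1)
groups = [rng.normal(loc=m, size=40) for m in (0.0, 1.0, 2.0)]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.boxplot(groups)              # discrete variable on the x-axis (as IV)
ax2.boxplot(groups, vert=False)  # flipped, discrete variable on the y-axis (as DV)
fig.savefig("boxplot_demo.png")
```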
10,228 | What is the best way to visualize relationship between discrete and continuous variables? | When considering the relationship between a binary outcome variable and a continuous predictor, I would use the loess smoother (with outlier detection turned off, e.g., in R lowess(x, y, iter=0)).
In the next release of the R Hmisc package you can easily create a single lattice graphic that puts such curves into a multi...
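A hypothetical Python analog of the R call above, using statsmodels' lowess, where it=0 turns off the robustness (outlier-detection) iterations, on toy data:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Toy data: binary outcome against a continuous predictor.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=400)
y = (rng.random(400) < 1 / (1 + np.exp(-x))).astype(float)

# it=0 is the analog of R's lowess(..., iter=0); returns sorted (x, smoothed y).
curve = lowess(y, x, frac=0.5, it=0)
print(curve[:3])
```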
10,229 | What is the best way to visualize relationship between discrete and continuous variables? | If you are not satisfied with simple scatter plots you might want to add the frequencies of the data points at each value of the discrete variable. How to do this then just depends on the statistical program you are using. Here is an example for Stata.
You can also apply this to the scatter plot of two categorical vari...
10,230 | What is the best way to visualize relationship between discrete and continuous variables? | I found a paper applicable on association between two binary variables on http://www.boekboek.com/xb130929113026 - here, in that article it is shown and proved that the strength of association between two binary variables can be expressed as a fraction of perfect association. So it becomes possible and preferable to st...
10,231 | Why is the logistic distribution called "logistic"? | The source document for the name "logistic" seems to be this 1844 presentation by P.-F. Verhulst, "Recherches mathématiques sur la loi d'accroissement de la population," in NOUVEAUX MÉMOIRES DE L'ACADÉMIE ROYALE DES SCIENCES ET BELLES-LETTRES DE BRUXELLES, vol. 18, p 3.
He differentiated what we would now call exponent...
10,232 | Why is the logistic distribution called "logistic"? | (Cross-posted from History of Science and Mathematics: source of “logistic growth”?)
As Ed states, the term logistic is due to the Belgian mathematician Pierre François Verhulst, who invented the logistic growth model, and named it logistic (French: logistique) in his 1845 "Recherches mathématiques sur la loi d'accrois...
10,233 | Why is the logistic distribution called "logistic"? | The logistic distribution is not a common distribution in analysis, but it ties together the notion of a latent underlying continuous variable which is thresholded in binary outcomes. It turns out that thresholding a logistic RV (to 1 if the RV is greater than some unknown value and 0 otherwise) and calculating a maxim...
10,234 | How to compute the standard errors of a logistic regression's coefficients | Does your software give you a parameter covariance (or variance-covariance) matrix? If so, the standard errors are the square root of the diagonal of that matrix. You probably want to consult a textbook (or google for university lecture notes) for how to get the $V_\beta$ matrix for linear and generalized linear mode...
10,235 | How to compute the standard errors of a logistic regression's coefficients | The standard errors of the model coefficients are the square roots of the diagonal entries of the covariance matrix. Consider the following:
Design matrix:
$\textbf{X = }\begin{bmatrix} 1 & x_{1,1} & \ldots & x_{1,p} \\ 1 & x_{2,1} & \ldots & x_{2,p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n,1} & \ldots & x_{n,p} \end{bmatrix}$ ...
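The rest of the derivation is cut off; as a hedged numpy sketch using the standard result (assumed to match the truncated answer) that $V_\beta = (X^\top W X)^{-1}$ with $W = \operatorname{diag}(\hat p_i(1-\hat p_i))$, on toy data:

```python
import numpy as np

# Toy data for an intercept plus one predictor.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])                 # design matrix
y = (rng.random(n) < 1 / (1 + np.exp(-(0.5 + x)))).astype(float)

# Fit by Newton-Raphson; the Hessian of the log-likelihood is X'WX.
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    H = X.T @ ((p * (1 - p))[:, None] * X)
    beta += np.linalg.solve(H, X.T @ (y - p))

p = 1 / (1 + np.exp(-X @ beta))
cov = np.linalg.inv(X.T @ ((p * (1 - p))[:, None] * X))  # V_beta
se = np.sqrt(np.diag(cov))                               # standard errors
print(beta, se)
```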
10,236 | How to compute the standard errors of a logistic regression's coefficients | If you're interested in doing inference, then you'll probably want to have a look at statsmodels. Standard errors and common statistical tests are available. Here's a logistic regression example. | How to compute the standard errors of a logistic regression's coefficients | If you're interested in doing inference, then you'll probably want to have a look at statsmodels. Standard errors and common statistical tests are available. Here's a logistic regression example. | How to compute the standard errors of a logistic regression's coefficients
If you're interested in doing inference, then you'll probably want to have a look at statsmodels. Standard errors and common statistical tests are available. Here's a logistic regression example. | How to compute the standard errors of a logistic regression's coefficients
If you're interested in doing inference, then you'll probably want to have a look at statsmodels. Standard errors and common statistical tests are available. Here's a logistic regression example. |
10,237 | How to compute the standard errors of a logistic regression's coefficients | Building upon the fantastic work of @j_sack I have two impulses to build on his code:
Since predict_proba is giving you the probability for n-classes, the result is an nD-array and that is why one should(?) specify the class of interest, e.g.:
predProbs[:,0]
(check class of interest with resLogit.classes_)
The diago...
10,238 | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darling? | First a general comment: Note that the Anderson-Darling test is for completely specified distributions, while the Shapiro-Wilk is for normals with any mean and variance. However, as noted in D'Agostino & Stephens$^{[1]}$ the Anderson-Darling adapts in a very convenient way to the estimation case, akin to (but converges...
10,239 | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darling? | Clearly the comparison that you read did not include SnowsPenultimateNormalityTest (http://cran.r-project.org/web/packages/TeachingDemos/TeachingDemos.pdf) since it has the highest possible power across all alternatives. So it should be considered "Best" if power is the only consideration (Note that my opinions are c... | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darli | Clearly the comparison that you read did not include SnowsPenultimateNormalityTest (http://cran.r-project.org/web/packages/TeachingDemos/TeachingDemos.pdf) since it has the highest possible power acr | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darling?
Clearly the comparison that you read did not include SnowsPenultimateNormalityTest (http://cran.r-project.org/web/packages/TeachingDemos/TeachingDemos.pdf) since it has the highest possible power across all alternat... | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darli
Clearly the comparison that you read did not include SnowsPenultimateNormalityTest (http://cran.r-project.org/web/packages/TeachingDemos/TeachingDemos.pdf) since it has the highest possible power acr |
10,240 | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darling? | A more serious answer to further this question and especially @silverfish's continued interest. One approach to answering questions like this is to run some simulations to compare. Below is some R code that simulates data under various alternatives and does several of the normality tests and compares the power (and a... | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darli | A more serious answer to further this question and especially @silverfish's continued interest. One approach to answering questions like this is to run some simulations to compare. Below is some R c | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darling?
A more serious answer to further this question and especially @silverfish's continued interest. One approach to answering questions like this is to run some simulations to compare. Below is some R code that simulat... | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darli
A more serious answer to further this question and especially @silverfish's continued interest. One approach to answering questions like this is to run some simulations to compare. Below is some R c |
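The R code itself is truncated away; here is a hypothetical Python analog of the same simulation idea (scipy, with an exponential alternative chosen purely for illustration), estimating power at the 5% level:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, alpha = 50, 500, 0.05
sw_reject = ad_reject = 0
for _ in range(reps):
    x = rng.exponential(size=n)                 # a non-normal alternative
    sw_reject += stats.shapiro(x).pvalue < alpha
    res = stats.anderson(x, dist='norm')        # gives critical values, not a p-value
    ad_reject += res.statistic > res.critical_values[2]  # the 5% level
print(sw_reject / reps, ad_reject / reps)       # estimated power of each test
```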
10,241 | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darling? | I'm late to the party, but will answer with references to the published peer-reviewed research. The reason why I don't answer Yes/No to OP's question is that it is more complicated than it may seem. There isn't one test which would be the most powerful for samples coming from any distribution with or without outliers. ... | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darli | I'm late to the party, but will answer with references to the published peer-reviewed research. The reason why I don't answer Yes/No to OP's question is that it is more complicated than it may seem. T | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darling?
I'm late to the party, but will answer with references to the published peer-reviewed research. The reason why I don't answer Yes/No to OP's question is that it is more complicated than it may seem. There isn't one t... | Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darli
I'm late to the party, but will answer with references to the published peer-reviewed research. The reason why I don't answer Yes/No to OP's question is that it is more complicated than it may seem. T |
10,242 | Practically speaking, how do people handle ANOVA when the data doesn't quite meet assumptions? | I'm trying to figure out how actual working analysts handle data that doesn't quite meet the assumptions.
It depends on my needs, which assumptions are violated, in what way, how badly, how much that affects the inference, and sometimes on the sample size.
I'm running analysis on grouped data from trees in four grou...
10,243 | Practically speaking, how do people handle ANOVA when the data doesn't quite meet assumptions? | It’s actually not very difficult to handle heteroscedasticity in simple linear models (e.g., one- or two-way ANOVA-like models).
Robustness of ANOVA
First, as others have noted, the ANOVA is amazingly robust to deviations from the assumption of equal variances, especially if you have approximately balanced data (equal n...
10,244 | Practically speaking, how do people handle ANOVA when the data doesn't quite meet assumptions? | There may indeed be some transformation of your data that produces an acceptably normal distribution. Of course, now your inference is about the transformed data, not the untransformed data.
Assuming you are talking about a oneway ANOVA, the Kruskal-Wallis test is an appropriate nonparametric analog to the onew...
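A hedged scipy sketch of the Kruskal-Wallis test as the nonparametric analog of one-way ANOVA, on made-up skewed (lognormal) samples for three groups:

```python
import numpy as np
from scipy import stats

# Made-up skewed data: three groups with shifted lognormal locations.
rng = np.random.default_rng(2)
g1 = rng.lognormal(mean=0.0, sigma=1.0, size=30)
g2 = rng.lognormal(mean=0.5, sigma=1.0, size=30)
g3 = rng.lognormal(mean=1.0, sigma=1.0, size=30)

stat, p = stats.kruskal(g1, g2, g3)  # rank-based test of equal locations
print(stat, p)
```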
10,245 | Practically speaking, how do people handle ANOVA when the data doesn't quite meet assumptions? | It sounds to me as though you are doing the footwork and are trying your best but are worried your efforts will not be good enough to get your paper past the reviewers. Very much a real-world problem. I think all researchers struggle with analyses that appear to be borderline or even frankly breaching assumptions from ... | Practically speaking, how do people handle ANOVA when the data doesn't quite meet assumptions? | It sounds to me as though you are doing the footwork and are trying your best but are worried your efforts will not be good enough to get your paper past the reviewers. Very much a real-world problem. | Practically speaking, how do people handle ANOVA when the data doesn't quite meet assumptions?
It sounds to me as though you are doing the footwork and are trying your best but are worried your efforts will not be good enough to get your paper past the reviewers. Very much a real-world problem. I think all researchers ... | Practically speaking, how do people handle ANOVA when the data doesn't quite meet assumptions?
It sounds to me as though you are doing the footwork and are trying your best but are worried your efforts will not be good enough to get your paper past the reviewers. Very much a real-world problem. |
10,246 | Why are neural networks described as black-box models? | A neural network is a black box in the sense that while it can approximate any function, studying its structure won't give you any insights on the structure of the function being approximated.
As an example, one common use of neural networks in the banking business is to classify loaners into "good payers" and "bad payer...
10,247 | Why are neural networks described as black-box models? | Google has published Inception-v3. It's a Neural Network (NN) image classification algorithm (telling a cat from a dog).
In the paper they talk about the current state of image classification
For example, GoogleNet employed only 5 million parameters, which represented a 12x reduction with respect to its predecesso...
10,248 | Simplify sum of combinations with same n, all possible values of k | See
http://en.wikipedia.org/wiki/Combination#Number_of_k-combinations_for_all_k
which says
$$ \sum_{k=0}^{n} \binom{n}{k} = 2^n$$
You can prove this using the binomial theorem where $x=y=1$.
Now, since $\binom{n}{0} = 1$ for any $n$, it follows that
$$ \sum_{k=1}^{n} \binom{n}{k} = 2^n - 1$$
In your case $n=8$, ...
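A quick numeric check of the identity above, $\sum_{k=1}^{n} \binom{n}{k} = 2^n - 1$, for the case $n=8$:

```python
from math import comb

# Sum C(8,k) over k = 1..8; this should equal 2**8 - 1.
n = 8
total = sum(comb(n, k) for k in range(1, n + 1))
print(total)  # 255, i.e. 2**8 - 1
```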
10,249 | Simplify sum of combinations with same n, all possible values of k | Homework?
Hint:
Remember the binomial theorem:
$$
(x+y)^n = \sum_{k=0}^{n}\binom{n}{k}x^ky^{n-k}
$$
Now, if you could just find x and y so that $x^ky^{n-k}$ is constant...
10,250 | Why do you need to scale data in KNN | The k-nearest neighbor algorithm relies on majority voting based on class membership of 'k' nearest samples for a given test point. The nearness of samples is typically based on Euclidean distance.
Consider a simple two class classification problem, where a Class 1 sample is chosen (black) along with its 10-nearest n...
10,251 | Why do you need to scale data in KNN | Suppose you had a dataset (m "examples" by n "features") and all but one feature dimension had values strictly between 0 and 1, while a single feature dimension had values that range from -1000000 to 1000000. When taking the euclidean distance between pairs of "examples", the values of the feature dimensions that range... | Why do you need to scale data in KNN | Suppose you had a dataset (m "examples" by n "features") and all but one feature dimension had values strictly between 0 and 1, while a single feature dimension had values that range from -1000000 to | Why do you need to scale data in KNN
Suppose you had a dataset (m "examples" by n "features") and all but one feature dimension had values strictly between 0 and 1, while a single feature dimension had values that range from -1000000 to 1000000. When taking the euclidean distance between pairs of "examples", the values... | Why do you need to scale data in KNN
Suppose you had a dataset (m "examples" by n "features") and all but one feature dimension had values strictly between 0 and 1, while a single feature dimension had values that range from -1000000 to |
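A hypothetical numeric illustration (made-up values) of the point above: the huge-scale feature swamps the Euclidean distance, and rescaling restores the small-scale feature's influence:

```python
import numpy as np

# Feature 0 lives in [0, 1]; feature 1 lives on a scale of hundreds of thousands.
query = np.array([0.5, 100_000.0])
p1 = np.array([0.5, 101_000.0])   # identical in feature 0, 1000 apart in feature 1
p2 = np.array([0.0, 100_001.0])   # far in feature 0 (relatively), 1 apart in feature 1

print(np.linalg.norm(query - p1))  # 1000.0 -> judged "far"
print(np.linalg.norm(query - p2))  # ~1.1   -> judged "near"; feature 0 is ignored

# After rescaling feature 1 by its (assumed) 1,000,000-wide range,
# both features contribute comparably to the distance:
scale = np.array([1.0, 1_000_000.0])
print(np.linalg.norm((query - p1) / scale))  # 0.001
print(np.linalg.norm((query - p2) / scale))  # ~0.5
```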
10,252 | Why do you need to scale data in KNN | If the scale of features is very different then normalization is required. This is because the distance calculation done in KNN uses feature values.
When the one feature values are large than other, that feature will dominate the distance hence the outcome of the KNN.
see example on gist.github.com | Why do you need to scale data in KNN | If the scale of features is very different then normalization is required. This is because the distance calculation done in KNN uses feature values.
When the one feature values are large than other, t | Why do you need to scale data in KNN
If the scale of features is very different then normalization is required. This is because the distance calculation done in KNN uses feature values.
When the one feature values are large than other, that feature will dominate the distance hence the outcome of the KNN.
see example on... | Why do you need to scale data in KNN
If the scale of features is very different then normalization is required. This is because the distance calculation done in KNN uses feature values.
When one feature's values are larger than the others', t
10,253 | Why do you need to scale data in KNN | The larger the scale a particular feature has relative to other features, the more weight that feature will have in distance calculations.
Scaling all features to a common scale gives each feature an equal weight in distance calculations.
But notice that scaling introduces a particular weighting on the distance functio... | Why do you need to scale data in KNN | The larger the scale a particular feature has relative to other features, the more weight that feature will have in distance calculations.
Scaling all features to a common scale gives each feature an | Why do you need to scale data in KNN
The larger the scale a particular feature has relative to other features, the more weight that feature will have in distance calculations.
Scaling all features to a common scale gives each feature an equal weight in distance calculations.
But notice that scaling introduces a particu... | Why do you need to scale data in KNN
The larger the scale a particular feature has relative to other features, the more weight that feature will have in distance calculations.
Scaling all features to a common scale gives each feature an |
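The point these answers make is easy to check numerically. Below is a minimal Python sketch (the two example points and the assumed feature ranges [0, 1] and [-1000000, 1000000] are invented for illustration): on the raw values the wide-range feature determines the Euclidean distance almost entirely, while dividing each feature by its range restores both features' influence.

```python
import numpy as np

# two hypothetical "examples": feature 1 lies in [0, 1], feature 2 in [-1e6, 1e6]
a = np.array([0.2, 500_000.0])
b = np.array([0.9, -500_000.0])

# raw Euclidean distance: the wide-range feature swamps the other one
d_raw = np.linalg.norm(a - b)

# divide each feature difference by its range, as min-max scaling would do
ranges = np.array([1.0, 2_000_000.0])
d_scaled = np.linalg.norm((a - b) / ranges)
```

Unscaled, the first feature contributes less than one part in a trillion to the squared distance; after scaling, both features matter.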
10,254 | Polynomial contrasts for regression | Just to recap (and in case the OP hyperlinks fail in the future), we are looking at a dataset hsb2 as such:
id female race ses schtyp prog read write math science socst
1 70 0 4 1 1 1 57 52 41 47 57
2 121 1 4 2 1 3 68 59 53 63 61
...
199 118 ... | Polynomial contrasts for regression | Just to recap (and in case the OP hyperlinks fail in the future), we are looking at a dataset hsb2 as such:
id female race ses schtyp prog read write math science socst
1 70 0 4 1 | Polynomial contrasts for regression
Just to recap (and in case the OP hyperlinks fail in the future), we are looking at a dataset hsb2 as such:
id female race ses schtyp prog read write math science socst
1 70 0 4 1 1 1 57 52 41 47 57
2 121 1 4 2 1 3 68 ... | Polynomial contrasts for regression
Just to recap (and in case the OP hyperlinks fail in the future), we are looking at a dataset hsb2 as such:
id female race ses schtyp prog read write math science socst
1 70 0 4 1 |
10,255 | Polynomial contrasts for regression | I will use your example to explain how it works. Using polynomial contrasts with four groups yields the following.
\begin{align}
E\,write_1 &= \mu -0.67L + 0.5Q -0.22C\\
E\,write_2 &= \mu -0.22L -0.5Q + 0.67C\\
E\,write_3 &= \mu + 0.22L -0.5Q -0.67C\\
E\,write_4 &= \mu + 0.67L + 0.5Q + 0.22C
\end{align}
Where first equati... | Polynomial contrasts for regression | I will use your example to explain how it works. Using polynomial contrasts with four groups yields the following.
\begin{align}
E\,write_1 &= \mu -0.67L + 0.5Q -0.22C\\
E\,write_2 &= \mu -0.22L -0.5Q + | Polynomial contrasts for regression
I will use your example to explain how it works. Using polynomial contrasts with four groups yields the following.
\begin{align}
E\,write_1 &= \mu -0.67L + 0.5Q -0.22C\\
E\,write_2 &= \mu -0.22L -0.5Q + 0.67C\\
E\,write_3 &= \mu + 0.22L -0.5Q -0.67C\\
E\,write_4 &= \mu + 0.67L + 0.5Q + ... | Polynomial contrasts for regression
I will use your example to explain how it works. Using polynomial contrasts with four groups yields the following.
\begin{align}
E\,write_1 &= \mu -0.67L + 0.5Q -0.22C\\
E\,write_2 &= \mu -0.22L -0.5Q + |
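The coefficients ±0.67, ±0.5, ±0.22 appearing in these equations are the orthogonal polynomial contrasts for four equally spaced levels (what R's contr.poly(4) returns). Up to a sign per column they can be reproduced by QR-orthonormalizing a Vandermonde matrix; a NumPy sketch:

```python
import numpy as np

levels = np.arange(1, 5)                    # four equally spaced groups
V = np.vander(levels, 4, increasing=True)   # columns: 1, x, x^2, x^3
Q, _ = np.linalg.qr(V)                      # orthonormalize the columns in order
contrasts = Q[:, 1:]                        # drop the constant column: L, Q, C
```

Because the columns are orthonormal, the linear, quadratic, and cubic effects can be estimated and tested independently of one another.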
10,256 | Is there always a maximizer for any MLE problem? | Perhaps the engineer had in mind canonical exponential families: in their natural parametrization, the parameter space is convex and the log-likelihood is concave (see Thm 1.6.3 in Bickel & Doksum's Mathematical Statistics, Volume 1). Also, under some mild technical conditions (basically that the model be "full rank",... | Is there always a maximizer for any MLE problem? | Perhaps the engineer had in mind canonical exponential families: in their natural parametrization, the parameter space is convex and the log-likelihood is concave (see Thm 1.6.3 in Bickel & Doksum's M | Is there always a maximizer for any MLE problem?
Perhaps the engineer had in mind canonical exponential families: in their natural parametrization, the parameter space is convex and the log-likelihood is concave (see Thm 1.6.3 in Bickel & Doksum's Mathematical Statistics, Volume 1). Also, under some mild technical con... | Is there always a maximizer for any MLE problem?
Perhaps the engineer had in mind canonical exponential families: in their natural parametrization, the parameter space is convex and the log-likelihood is concave (see Thm 1.6.3 in Bickel & Doksum's M |
10,257 | Is there always a maximizer for any MLE problem? | The likelihood function often attains a maximum for estimation of the parameter of interest. Nevertheless, sometimes the MLE does not exist, such as for a Gaussian mixture distribution or nonparametric functions, which have more than one peak (bi- or multi-modal). I often face the problem of estimating population genetics unknown param... | Is there always a maximizer for any MLE problem? | The likelihood function often attains a maximum for estimation of the parameter of interest. Nevertheless, sometimes the MLE does not exist, such as for a Gaussian mixture distribution or nonparametric functions, whic
The likelihood function often attains a maximum for estimation of the parameter of interest. Nevertheless, sometimes the MLE does not exist, such as for a Gaussian mixture distribution or nonparametric functions, which have more than one peak (bi- or multi-modal). I often face the proble... | Is there always a maximizer for any MLE problem?
The likelihood function often attains a maximum for estimation of the parameter of interest. Nevertheless, sometimes the MLE does not exist, such as for a Gaussian mixture distribution or nonparametric functions, whic
10,258 | Is there always a maximizer for any MLE problem? | Perhaps someone will find the following simple example useful.
Consider flipping a coin once. Let $\theta$ denote the probability of heads. If it is known that the coin can come up either heads or tails then $\theta \in (0,1)$. Since the set $(0,1)$ is open, the parameter space is not compact. The likelihood for $\the... | Is there always a maximizer for any MLE problem? | Perhaps someone will find the following simple example useful.
Consider flipping a coin once. Let $\theta$ denote the probability of heads. If it is known that the coin can come up either heads or ta | Is there always a maximizer for any MLE problem?
Perhaps someone will find the following simple example useful.
Consider flipping a coin once. Let $\theta$ denote the probability of heads. If it is known that the coin can come up either heads or tails then $\theta \in (0,1)$. Since the set $(0,1)$ is open, the paramet... | Is there always a maximizer for any MLE problem?
Perhaps someone will find the following simple example useful.
Consider flipping a coin once. Let $\theta$ denote the probability of heads. If it is known that the coin can come up either heads or ta |
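To make the coin example concrete (assuming the single flip came up heads, so the likelihood is L(θ) = θ): L is strictly increasing on the open interval (0, 1), so every interior candidate is beaten by a larger θ and the supremum sits at the excluded endpoint θ = 1. A tiny sketch:

```python
import numpy as np

theta = np.linspace(0.01, 0.99, 99)   # a grid inside the open interval (0, 1)
lik = theta                           # L(theta) = theta after one observed head
# strictly increasing: no interior point can be a maximizer
no_interior_max = bool(np.all(np.diff(lik) > 0))
```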
10,259 | Is there always a maximizer for any MLE problem? | I admit I may be missing something, but --
If this is an estimation problem, and the goal is to estimate an unknown parameter, and the parameter is known to come from some closed and bounded set, and the likelihood function is continuous, then there has to exist a value for this parameter that maximizes the likelihood ... | Is there always a maximizer for any MLE problem? | I admit I may be missing something, but --
If this is an estimation problem, and the goal is to estimate an unknown parameter, and the parameter is known to come from some closed and bounded set, and | Is there always a maximizer for any MLE problem?
I admit I may be missing something, but --
If this is an estimation problem, and the goal is to estimate an unknown parameter, and the parameter is known to come from some closed and bounded set, and the likelihood function is continuous, then there has to exist a value ... | Is there always a maximizer for any MLE problem?
I admit I may be missing something, but --
If this is an estimation problem, and the goal is to estimate an unknown parameter, and the parameter is known to come from some closed and bounded set, and |
10,260 | Is there always a maximizer for any MLE problem? | Re: a comment by @Cardinal.
I suspect those that referred to Cardinal's comment below the original question, missed its subtle mischief: the OP makes a problematic statement, because it refers to the likelihood as a cost function (which we usually want to minimize), and then talks about concavity and maximization. The ... | Is there always a maximizer for any MLE problem? | Re: a comment by @Cardinal.
I suspect those that referred to Cardinal's comment below the original question, missed its subtle mischief: the OP makes a problematic statement, because it refers to the | Is there always a maximizer for any MLE problem?
Re: a comment by @Cardinal.
I suspect those that referred to Cardinal's comment below the original question, missed its subtle mischief: the OP makes a problematic statement, because it refers to the likelihood as a cost function (which we usually want to minimize), and ... | Is there always a maximizer for any MLE problem?
Re: a comment by @Cardinal.
I suspect those that referred to Cardinal's comment below the original question, missed its subtle mischief: the OP makes a problematic statement, because it refers to the |
10,261 | Why is a T distribution used for hypothesis testing a linear regression coefficient? | To understand why we use the t-distribution, you need to know the underlying distribution of $\widehat{\beta}$ and of the Residual sum of squares ($RSS$), as these two put together will give you the t-distribution.
The easier part is the distribution of $\widehat{\beta}$ which is a normal distribution - to see ... | Why is a T distribution used for hypothesis testing a linear regression coefficient? | To understand why we use the t-distribution, you need to know the underlying distribution of $\widehat{\beta}$ and of the Residual sum of squares ($RSS$), as these two put together will give yo
To understand why we use the t-distribution, you need to know the underlying distribution of $\widehat{\beta}$ and of the Residual sum of squares ($RSS$), as these two put together will give you the t-distribution.
The easier p... | Why is a T distribution used for hypothesis testing a linear regression coefficient?
To understand why we use the t-distribution, you need to know the underlying distribution of $\widehat{\beta}$ and of the Residual sum of squares ($RSS$), as these two put together will give yo
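The way the two pieces combine can be seen by simulation: a standard normal divided by the square root of an independent chi-square over its degrees of freedom is Student-t. A sketch with arbitrarily chosen degrees of freedom and sample size:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 10                              # residual degrees of freedom
z = rng.standard_normal(200_000)    # the normal part, playing the role of beta-hat
v = rng.chisquare(k, 200_000)       # the chi-square part, playing the role of RSS
t = z / np.sqrt(v / k)              # Student-t with k degrees of freedom
sample_var = t.var()                # theory: k / (k - 2) = 1.25
```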
10,262 | Why is a T distribution used for hypothesis testing a linear regression coefficient? | The answer is actually very simple: you use t-distribution because it was pretty much designed specifically for this purpose.
Ok, the nuance here is that it wasn't designed specifically for the linear regression. Gosset came up with the distribution of a sample that was drawn from the population. For instance, you draw a sam... | Why is a T distribution used for hypothesis testing a linear regression coefficient? | The answer is actually very simple: you use t-distribution because it was pretty much designed specifically for this purpose.
Ok, the nuance here is that it wasn't designed specifically for the linear | Why is a T distribution used for hypothesis testing a linear regression coefficient?
The answer is actually very simple: you use t-distribution because it was pretty much designed specifically for this purpose.
Ok, the nuance here is that it wasn't designed specifically for the linear regression. Gosset came up with di... | Why is a T distribution used for hypothesis testing a linear regression coefficient?
The answer is actually very simple: you use t-distribution because it was pretty much designed specifically for this purpose.
Ok, the nuance here is that it wasn't designed specifically for the linear |
10,263 | How to compare models on the basis of AIC? | One does not compare the absolute values of two AICs (which can be like $\sim 100$ but also $\sim 1000000$), but considers their difference:
$$\Delta_i=AIC_i-AIC_{\rm min},$$
where $AIC_i$ is the AIC of the $i$-th model, and $AIC_{\rm min}$ is the lowest AIC one obtains among the set of models examined (i.e., the prefe... | How to compare models on the basis of AIC? | One does not compare the absolute values of two AICs (which can be like $\sim 100$ but also $\sim 1000000$), but considers their difference:
$$\Delta_i=AIC_i-AIC_{\rm min},$$
where $AIC_i$ is the AIC | How to compare models on the basis of AIC?
One does not compare the absolute values of two AICs (which can be like $\sim 100$ but also $\sim 1000000$), but considers their difference:
$$\Delta_i=AIC_i-AIC_{\rm min},$$
where $AIC_i$ is the AIC of the $i$-th model, and $AIC_{\rm min}$ is the lowest AIC one obtains among ... | How to compare models on the basis of AIC?
One does not compare the absolute values of two AICs (which can be like $\sim 100$ but also $\sim 1000000$), but considers their difference:
$$\Delta_i=AIC_i-AIC_{\rm min},$$
where $AIC_i$ is the AIC |
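The Δ computation itself is a one-liner; the three AIC values below are invented purely to illustrate the bookkeeping:

```python
import numpy as np

aic = np.array([412.7, 410.1, 425.9])   # hypothetical AICs for three candidate models
delta = aic - aic.min()                 # Delta_i = AIC_i - AIC_min
best = int(np.argmin(aic))              # the preferred model is the one with Delta = 0
```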
10,264 | Construction of Dirichlet distribution with Gamma distribution | Jacobians--the absolute determinants of the change of variable function--appear formidable and can be complicated. Nevertheless, they are an essential and unavoidable part of the calculation of a multivariate change of variable. It would seem there's nothing for it but to write down a $k+1$ by $k+1$ matrix of derivat... | Construction of Dirichlet distribution with Gamma distribution | Jacobians--the absolute determinants of the change of variable function--appear formidable and can be complicated. Nevertheless, they are an essential and unavoidable part of the calculation of a mul | Construction of Dirichlet distribution with Gamma distribution
Jacobians--the absolute determinants of the change of variable function--appear formidable and can be complicated. Nevertheless, they are an essential and unavoidable part of the calculation of a multivariate change of variable. It would seem there's noth... | Construction of Dirichlet distribution with Gamma distribution
Jacobians--the absolute determinants of the change of variable function--appear formidable and can be complicated. Nevertheless, they are an essential and unavoidable part of the calculation of a mul |
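Whatever one makes of the Jacobian bookkeeping, the construction itself is easy to verify by simulation: normalize independent Gamma(α_i, 1) draws by their sum, and the empirical means match the Dirichlet means α_i / Σ_j α_j. A Monte Carlo sketch with an arbitrarily chosen α:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 3.0, 5.0])
g = rng.gamma(alpha, size=(200_000, 3))   # independent Gamma(alpha_i, 1) draws
x = g / g.sum(axis=1, keepdims=True)      # normalized vector ~ Dirichlet(alpha)
emp_mean = x.mean(axis=0)                 # should approach alpha / alpha.sum()
```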
10,265 | Caret and coefficients (glmnet) | Let's say your caret model is called "model". You can access the final glmnet model with model$finalModel. You can then call coef(model$finalModel), etc. You will have to select a value of lambda for which you want coefficients, such as coef(model$finalModel, model$bestTune$.lambda).
Take a look at the summaryFunction... | Caret and coefficients (glmnet) | Let's say your caret model is called "model". You can access the final glmnet model with model$finalModel. You can then call coef(model$finalModel), etc. You will have to select a value of lambda for
Let's say your caret model is called "model". You can access the final glmnet model with model$finalModel. You can then call coef(model$finalModel), etc. You will have to select a value of lambda for which you want coefficients, such as coef(model$finalModel, model$bestTune$.lambda).
Ta... | Caret and coefficients (glmnet)
Let's say your caret model is called "model". You can access the final glmnet model with model$finalModel. You can then call coef(model$finalModel), etc. You will have to select a value of lambda for
10,266 | What is the difference between $E(X|Y)$ and $E(X|Y=y)$? | Roughly speaking, the difference between $E(X \mid Y)$ and $E(X \mid Y = y)$ is that the former is a random variable, whereas the latter is (in some sense) a realization of $E(X \mid Y)$. For example, if $$(X, Y) \sim \mathcal N\left(\mathbf 0, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\right)$$
then $E(X \mid ... | What is the difference between $E(X|Y)$ and $E(X|Y=y)$? | Roughly speaking, the difference between $E(X \mid Y)$ and $E(X \mid Y = y)$ is that the former is a random variable, whereas the latter is (in some sense) a realization of $E(X \mid Y)$. For example, | What is the difference between $E(X|Y)$ and $E(X|Y=y)$?
Roughly speaking, the difference between $E(X \mid Y)$ and $E(X \mid Y = y)$ is that the former is a random variable, whereas the latter is (in some sense) a realization of $E(X \mid Y)$. For example, if $$(X, Y) \sim \mathcal N\left(\mathbf 0, \begin{pmatrix} 1 &... | What is the difference between $E(X|Y)$ and $E(X|Y=y)$?
Roughly speaking, the difference between $E(X \mid Y)$ and $E(X \mid Y = y)$ is that the former is a random variable, whereas the latter is (in some sense) a realization of $E(X \mid Y)$. For example, |
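For this bivariate normal example the standard result is E(X | Y = y) = ρy, so E(X | Y) = ρY is itself a random variable, while E(X | Y = y) is a plain number. A simulation sketch (the value ρ = 0.8 and the conditioning point y = 1 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.8
cov = [[1.0, rho], [rho, 1.0]]
xy = rng.multivariate_normal([0.0, 0.0], cov, size=400_000)
x, y = xy[:, 0], xy[:, 1]

near_one = np.abs(y - 1.0) < 0.05   # realization: condition on Y being close to 1
cond_mean = x[near_one].mean()      # estimate of E(X | Y = 1), i.e. rho * 1
```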
10,267 | What is the difference between $E(X|Y)$ and $E(X|Y=y)$? | Suppose that $X$ and $Y$ are random variables.
Let $y_0$ be a fixed real number, say $y_0 = 1$. Then,
$E[X\mid Y=y_0]= E[X\mid Y = 1]$ is a
number: it is the conditional expected value of $X$ given that $Y$ has
value $1$. Now, note for some other fixed real number $y_1$, say $y_1=1.5$, $E[X\mid Y = y_1] = E[X\mid Y ... | What is the difference between $E(X|Y)$ and $E(X|Y=y)$? | Suppose that $X$ and $Y$ are random variables.
Let $y_0$ be a fixed real number, say $y_0 = 1$. Then,
$E[X\mid Y=y_0]= E[X\mid Y = 1]$ is a
number: it is the conditional expected value of $X$ given | What is the difference between $E(X|Y)$ and $E(X|Y=y)$?
Suppose that $X$ and $Y$ are random variables.
Let $y_0$ be a fixed real number, say $y_0 = 1$. Then,
$E[X\mid Y=y_0]= E[X\mid Y = 1]$ is a
number: it is the conditional expected value of $X$ given that $Y$ has
value $1$. Now, note for some other fixed real numb... | What is the difference between $E(X|Y)$ and $E(X|Y=y)$?
Suppose that $X$ and $Y$ are random variables.
Let $y_0$ be a fixed real number, say $y_0 = 1$. Then,
$E[X\mid Y=y_0]= E[X\mid Y = 1]$ is a
number: it is the conditional expected value of $X$ given |
10,268 | What is the difference between $E(X|Y)$ and $E(X|Y=y)$? | $E(X|Y)$ is the expectation of a random variable: the expectation of $X$ conditional on $Y$.
$E(X|Y=y)$, on the other hand, is a particular value: the expected value of $X$ when $Y=y$.
Think of it this way: let $X$ represent the caloric intake and $Y$ represent height. $E(X|Y)$ is then the caloric intake, conditional o... | What is the difference between $E(X|Y)$ and $E(X|Y=y)$? | $E(X|Y)$ is the expectation of a random variable: the expectation of $X$ conditional on $Y$.
$E(X|Y=y)$, on the other hand, is a particular value: the expected value of $X$ when $Y=y$.
Think of it thi | What is the difference between $E(X|Y)$ and $E(X|Y=y)$?
$E(X|Y)$ is the expectation of a random variable: the expectation of $X$ conditional on $Y$.
$E(X|Y=y)$, on the other hand, is a particular value: the expected value of $X$ when $Y=y$.
Think of it this way: let $X$ represent the caloric intake and $Y$ represent he... | What is the difference between $E(X|Y)$ and $E(X|Y=y)$?
$E(X|Y)$ is the expectation of a random variable: the expectation of $X$ conditional on $Y$.
$E(X|Y=y)$, on the other hand, is a particular value: the expected value of $X$ when $Y=y$.
Think of it thi |
10,269 | What is the difference between $E(X|Y)$ and $E(X|Y=y)$? | $E(X|Y)$ is the expected value of $X$ given the values of $Y$
$E(X|Y=y)$ is the expected value of $X$ given that the value of $Y$ is $y$
Generally $P(X|Y)$ is the probability of the values of $X$ given the values of $Y$, but you can get more precise and say $P(X=x|Y=y)$, i.e. the probability of value $x$ from all $X$'s given the $y$'th value of $Y... | What is the difference between $E(X|Y)$ and $E(X|Y=y)$? | $E(X|Y)$ is the expected value of $X$ given the values of $Y$
$E(X|Y=y)$ is the expected value of $X$ given that the value of $Y$ is $y$
Generally $P(X|Y)$ is the probability of the values of $X$ given the values of $Y$, but | What is the difference between $E(X|Y)$ and $E(X|Y=y)$?
$E(X|Y)$ is the expected value of $X$ given the values of $Y$
$E(X|Y=y)$ is the expected value of $X$ given that the value of $Y$ is $y$
Generally $P(X|Y)$ is the probability of the values of $X$ given the values of $Y$, but you can get more precise and say $P(X=x|Y=y)$, i.e. the probability ...
$E(X|Y)$ is the expected value of $X$ given the values of $Y$
$E(X|Y=y)$ is the expected value of $X$ given that the value of $Y$ is $y$
Generally $P(X|Y)$ is the probability of the values of $X$ given the values of $Y$, but
10,270 | Symbolic computation in R? | Yes. There is the Ryacas package which is hosted on Google Code here. Ryacas has recently been expanded/converted to the rMathpiper package which is hosted here. I have used Ryacas and it is straightforward, but you will need to install Yacas in order for it to work (Yacas does all the heavy lifting; Ryacas is just ... | Symbolic computation in R? | Yes. There is the Ryacas package which is hosted on Google Code here. Ryacas has recently been expanded/converted to the rMathpiper package which is hosted here. I have used Ryacas and it is straig | Symbolic computation in R?
Yes. There is the Ryacas package which is hosted on Google Code here. Ryacas has recently been expanded/converted to the rMathpiper package which is hosted here. I have used Ryacas and it is straightforward, but you will need to install Yacas in order for it to work (Yacas does all the hea... | Symbolic computation in R?
Yes. There is the Ryacas package which is hosted on Google Code here. Ryacas has recently been expanded/converted to the rMathpiper package which is hosted here. I have used Ryacas and it is straig |
10,271 | Symbolic computation in R? | Some things are also in base R --- see help(deriv) or help(D).
A simple example from that help page:
R> trig.exp <- expression(sin(cos(x + y^2)))
R> ( D.sc <- D(trig.exp, "x") )
-(cos(cos(x + y^2)) * sin(x + y^2))
R> all.equal(D(trig.exp[[1]], "x"), D.sc)
[1] TRUE
R> | Symbolic computation in R? | Some things are also in base R --- see help(deriv) or help(D).
A simple example from that help page:
R> trig.exp <- expression(sin(cos(x + y^2)))
R> ( D.sc <- D(trig.exp, "x") )
-(cos(cos(x + y^2)) | Symbolic computation in R?
Some things are also in base R --- see help(deriv) or help(D).
A simple example from that help page:
R> trig.exp <- expression(sin(cos(x + y^2)))
R> ( D.sc <- D(trig.exp, "x") )
-(cos(cos(x + y^2)) * sin(x + y^2))
R> all.equal(D(trig.exp[[1]], "x"), D.sc)
[1] TRUE
R> | Symbolic computation in R?
Some things are also in base R --- see help(deriv) or help(D).
A simple example from that help page:
R> trig.exp <- expression(sin(cos(x + y^2)))
R> ( D.sc <- D(trig.exp, "x") )
-(cos(cos(x + y^2)) |
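The derivative that D() reports can be sanity-checked numerically in any language. A Python sketch using only the standard library, comparing a central finite difference against the symbolic answer at an arbitrarily chosen point:

```python
import math

def f(x, y):
    return math.sin(math.cos(x + y * y))

def df_dx(x, y):
    # the derivative reported by R's D(): -(cos(cos(x + y^2)) * sin(x + y^2))
    return -math.cos(math.cos(x + y * y)) * math.sin(x + y * y)

# central finite difference approximation of the partial derivative in x
h = 1e-6
x0, y0 = 0.3, 0.7
numeric = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
```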
10,272 | Symbolic computation in R? | It makes more sense to use a "real" CAS like Maxima. | Symbolic computation in R? | It makes more sense to use a "real" CAS like Maxima. | Symbolic computation in R?
It makes more sense to use a "real" CAS like Maxima. | Symbolic computation in R?
It makes more sense to use a "real" CAS like Maxima. |
10,273 | Is there any intuitive explanation of why logistic regression will not work for perfect separation case? And why adding regularization will fix it? | A 2D demo with toy data will be used to explain what was happening for perfect separation on logistic regression with and without regularization. The experiments started with an overlapping data set and we gradually move two classes apart. The objective function contour and optima (logistic loss) will be shown in the r... | Is there any intuitive explanation of why logistic regression will not work for perfect separation c | A 2D demo with toy data will be used to explain what was happening for perfect separation on logistic regression with and without regularization. The experiments started with an overlapping data set a | Is there any intuitive explanation of why logistic regression will not work for perfect separation case? And why adding regularization will fix it?
A 2D demo with toy data will be used to explain what was happening for perfect separation on logistic regression with and without regularization. The experiments started wi... | Is there any intuitive explanation of why logistic regression will not work for perfect separation c
A 2D demo with toy data will be used to explain what was happening for perfect separation on logistic regression with and without regularization. The experiments started with an overlapping data set a |
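The demo's conclusion can be reproduced in one dimension with a hand-rolled gradient ascent (the toy data, step size, and penalty strength below are invented): on perfectly separated classes the unpenalized coefficient keeps growing as the iterations increase, while an L2 penalty pins it at a finite optimum.

```python
import numpy as np

def fit(lam, steps, lr=0.5):
    # 1-D logistic regression without intercept on perfectly separated data
    x = np.array([-2.0, -1.0, 1.0, 2.0])
    y = np.array([0.0, 0.0, 1.0, 1.0])
    w = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-w * x))
        # gradient ascent step on the L2-penalized log-likelihood
        w += lr * (np.sum((y - p) * x) - 2.0 * lam * w)
    return w

w_free = fit(lam=0.0, steps=2000)    # grows without bound, roughly like log(steps)
w_ridge = fit(lam=0.1, steps=2000)   # settles at a finite optimum
```

Intuitively, without the penalty every further increase of w still improves the likelihood a little, so the optimizer never stops; the penalty term eventually outweighs that vanishing gain.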
10,274 | Setting knots in natural cubic splines in R | How to specify the knots in R
The ns function generates a natural regression spline basis given an input vector. The knots can be specified either via a degrees-of-freedom argument df which takes an integer or via a knots argument knots which takes a vector giving the desired placement of the knots. Note that in the co... | Setting knots in natural cubic splines in R | How to specify the knots in R
The ns function generates a natural regression spline basis given an input vector. The knots can be specified either via a degrees-of-freedom argument df which takes an i | Setting knots in natural cubic splines in R
How to specify the knots in R
The ns function generates a natural regression spline basis given an input vector. The knots can be specified either via a degrees-of-freedom argument df which takes an integer or via a knots argument knots which takes a vector giving the desired... | Setting knots in natural cubic splines in R
How to specify the knots in R
The ns function generates a natural regression spline basis given an input vector. The knots can be specified either via a degrees-of-freedom argument df which takes an i |
10,275 | Should I capitalise the "N" in "Normal Distribution" in British English? | For what it's worth, Wikipedia says this on the origin of the name:
Since its introduction, the normal distribution has been known by many different name... Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthog... | Should I capitalise the "N" in "Normal Distribution" in British English? | For what it's worth, Wikipedia says this on the origin of the name:
Since its introduction, the normal distribution has been known by many different name... Gauss himself apparently coined the term w | Should I capitalise the "N" in "Normal Distribution" in British English?
For what it's worth, Wikipedia says this on the origin of the name:
Since its introduction, the normal distribution has been known by many different name... Gauss himself apparently coined the term with reference to the "normal equations" involve... | Should I capitalise the "N" in "Normal Distribution" in British English?
For what it's worth, Wikipedia says this on the origin of the name:
Since its introduction, the normal distribution has been known by many different name... Gauss himself apparently coined the term w |
10,276 | Should I capitalise the "N" in "Normal Distribution" in British English? | On one hand, "Normal" seems not to be an adjective, nor a feature of some distribution that it is more normal than any other (or more "beta", more "binomial"). "Normal" is a name of a distribution and can be considered a proper noun, and so be capitalized. As @Scortchi noticed in his comment, this is also a gener... | Should I capitalise the "N" in "Normal Distribution" in British English? | On one hand, "Normal" seems not to be an adjective, nor a feature of some distribution that it is more normal than any other (or more "beta", more "binomial"). "Normal" is a name of a distribution and | Should I capitalise the "N" in "Normal Distribution" in British English?
On one hand, "Normal" seems not to be an adjective, nor a feature of some distribution that it is more normal than any other (or more "beta", more "binomial"). "Normal" is a name of a distribution and can be considered a proper noun, and so ... | Should I capitalise the "N" in "Normal Distribution" in British English?
On one hand, "Normal" seems not to be an adjective, nor a feature of some distribution that it is more normal than any other (or more "beta", more "binomial"). "Normal" is a name of a distribution and |
10,277 | Measuring accuracy of a logistic regression-based model | A measure that is often used to validate logistic regression is the AUC of the ROC curve (plot of sensitivity against 1-specificity - just google for the terms if needed). This, in essence, evaluates the whole range of threshold values.
On the downside: evaluating the whole range of threshold values may not be what yo... | Measuring accuracy of a logistic regression-based model | A measure that is often used to validate logistic regression is the AUC of the ROC curve (plot of sensitivity against 1-specificity - just google for the terms if needed). This, in essence, evaluates
A measure that is often used to validate logistic regression is the AUC of the ROC curve (plot of sensitivity against 1-specificity - just google for the terms if needed). This, in essence, evaluates the whole range of threshold values.
On the downside: evaluatin... | Measuring accuracy of a logistic regression-based model
A measure that is often used to validate logistic regression is the AUC of the ROC curve (plot of sensitivity against 1-specificity - just google for the terms if needed). This, in essence, evaluates
10,278 | Measuring accuracy of a logistic regression-based model | You are correct to worry about proportion classified correct as mainly reflecting the effect of an arbitrary boundary. I'd recommend two measures. One is the $c$-index or ROC area as others have described. This has an interpretation that is simpler than thinking about an ROC curve, and is a measure of pure predictiv... | Measuring accuracy of a logistic regression-based model | You are correct to worry about proportion classified correct as mainly reflecting the effect of an arbitrary boundary. I'd recommend two measures. One is the $c$-index or ROC area as others have des | Measuring accuracy of a logistic regression-based model
You are correct to worry about proportion classified correct as mainly reflecting the effect of an arbitrary boundary. I'd recommend two measures. One is the $c$-index or ROC area as others have described. This has an interpretation that is simpler than thinkin... | Measuring accuracy of a logistic regression-based model
You are correct to worry about proportion classified correct as mainly reflecting the effect of an arbitrary boundary. I'd recommend two measures. One is the $c$-index or ROC area as others have des |
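The c-index mentioned here has a direct pairwise reading: it is the proportion of (event, non-event) pairs in which the event received the higher predicted probability, with ties counted as one half. A small hand-rolled sketch with invented predictions:

```python
import numpy as np

def c_index(y, p):
    # fraction of positive/negative pairs ranked concordantly (ties count 1/2)
    pos, neg = p[y == 1], p[y == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

y = np.array([0, 0, 1, 1])
p = np.array([0.1, 0.4, 0.35, 0.8])
c = c_index(y, p)    # 3 of the 4 pairs are concordant, so c = 0.75
```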
10,279 | Measuring accuracy of a logistic regression-based model | If your data are grouped by $x$ values, you can compute the model predicted value and its associated confidence interval, and see if the observed percentage falls within that range. For example, if you had 10 observations at $x=10$, 10 obs at $x=20$, 10 obs at $x=30$, etc., then mean(y[x==10]==1), mean(y[x==20]==1), ... | Measuring accuracy of a logistic regression-based model | If your data are grouped by $x$ values, you can compute the model predicted value and its associated confidence interval, and see if
If your data are grouped by $x$ values, you can compute the model predicted value and its associated confidence interval, and see if the observed percentage falls within that range. For example, if you had 10 observations at $x=10$, 10 obs at $x=20$, 10 obs at $... | Measuring accuracy of a logistic regression-based model
If your data are grouped by $x$ values, you can compute the model predicted value and its associated confidence interval, and see if the observed percentage falls within that range. For example, if
10,280 | Measuring accuracy of a logistic regression-based model | I think you could establish a threshold (say 0.5), so when your probability is equal to or greater than that threshold your predicted class would be 1, and 0 otherwise. Then, you could obtain a measure of your accuracy in this way:
confusion_matrix <- ftable(actual_value, predicted_value)
accuracy <- sum(diag(confusio... | Measuring accuracy of a logistic regression-based model | I think you could establish a threshold (say 0.5), so when your probability is equal to or greater than that threshold your predicted class would be 1, and 0 otherwise. Then, you could obtain a measu
I think you could establish a threshold (say 0.5), so when your probability is equal to or greater than that threshold your predicted class would be 1, and 0 otherwise. Then, you could obtain a measure of your accuracy in this way:
confusion_matrix <- ftable(actu... | Measuring accuracy of a logistic regression-based model
I think you could establish a threshold (say 0.5), so when your probability is equal to or greater than that threshold your predicted class would be 1, and 0 otherwise. Then, you could obtain a measu
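The R snippet in this answer is cut off by the dump; as a hedged sketch of the same idea in Python (function and variable names here are mine, not from the original answer): threshold the probabilities, cross-tabulate actual against predicted, and take the diagonal share as accuracy.

```python
from collections import Counter

def accuracy_from_threshold(probs, actual, threshold=0.5):
    """Threshold predicted probabilities, then compute the share of
    correct predictions (the diagonal of the confusion matrix)."""
    predicted = [1 if p >= threshold else 0 for p in probs]
    confusion = Counter(zip(actual, predicted))  # (actual, predicted) -> count
    correct = confusion[(0, 0)] + confusion[(1, 1)]
    return correct / len(actual)

probs  = [0.9, 0.4, 0.6, 0.2]
actual = [1, 0, 1, 1]
print(accuracy_from_threshold(probs, actual))  # 3 of 4 correct -> 0.75
```

Note that, as the other answers stress, this accuracy depends entirely on the arbitrary 0.5 cutoff.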
10,281 | Measuring accuracy of a logistic regression-based model | You may want to have a look at my package softclassval (at softclassval.r-forge.r-project.org you will also find two oral presentations I gave about the ideas behind the package).
I wrote it for a slightly different problem, namely if the reference (e.g. pathologist) "refuses" to give a clear class. However, you can use it wi... | Measuring accuracy of a logistic regression-based model | You may want to have a look at my package softclassval (at softclassval.r-forge.r-project.org you will also find two oral presentations I gave about the ideas behind the package).
I wrote it for a slightly d | Measuring accuracy of a logistic regression-based model
You may want to have a look at my package softclassval (at softclassval.r-forge.r-project.org you will also find two oral presentations I gave about the ideas behind the package).
I wrote it for a slightly different problem, namely if the reference (e.g. pathologist) "re... | Measuring accuracy of a logistic regression-based model
You may want to have a look at my package softclassval (at softclassval.r-forge.r-project.org you will also find two oral presentations I gave about the ideas behind the package).
I wrote it for a slightly d |
10,282 | Measuring accuracy of a logistic regression-based model | I wonder why you aren't using the Bernoulli log-likelihood function. Basically, for every $0$ actual value, you score $-\log (1-\hat {p}) $. This measures how close to predicting $0$ your model is. Similarly, for every $1$ actual value you score $-\log (\hat {p}) $. This measures how close to predicting $1$ your model... | Measuring accuracy of a logistic regression-based model | I wonder why you aren't using the Bernoulli log-likelihood function. Basically, for every $0$ actual value, you score $-\log (1-\hat {p}) $. This measures how close to predicting $0$ your model is. S
I wonder why you aren't using the Bernoulli log-likelihood function. Basically, for every $0$ actual value, you score $-\log (1-\hat {p}) $. This measures how close to predicting $0$ your model is. Similarly, for every $1$ actual value you score $-\log (\hat {p})... | Measuring accuracy of a logistic regression-based model
I wonder why you aren't using the Bernoulli log-likelihood function. Basically, for every $0$ actual value, you score $-\log (1-\hat {p}) $. This measures how close to predicting $0$ your model is. S
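The scoring rule this answer describes is the mean negative Bernoulli log-likelihood, better known as log loss; a minimal sketch (the function name is mine):

```python
import math

def log_loss(actual, probs):
    """Mean Bernoulli negative log-likelihood: score -log(p_hat) for
    actual 1s and -log(1 - p_hat) for actual 0s."""
    total = 0.0
    for y, p in zip(actual, probs):
        total += -math.log(p) if y == 1 else -math.log(1.0 - p)
    return total / len(actual)

# A confident correct prediction scores near 0; a confident wrong one blows up.
print(log_loss([1, 0], [0.9, 0.1]))  # small
print(log_loss([1, 0], [0.1, 0.9]))  # large
```

Unlike threshold accuracy, this score uses the full predicted probability, so it rewards calibration rather than an arbitrary cutoff.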
10,283 | Measuring accuracy of a logistic regression-based model | Here's my quick suggestion: Since your dependent variable is binary, you can assume it follows a Bernoulli distribution, with probability given by logistic regression $Pr_{i} = invlogit(a + bx_{i})$.
Now, set one simulation as follows:
$ y.rep[i] \sim Bernoulli (p[i])$
Then, run this simulation, say, 100 times. You wil... | Measuring accuracy of a logistic regression-based model | Here's my quick suggestion: Since your dependent variable is binary, you can assume it follows a Bernoulli distribution, with probability given by logistic regression $Pr_{i} = invlogit(a + bx_{i})$.
| Measuring accuracy of a logistic regression-based model
Here's my quick suggestion: Since your dependent variable is binary, you can assume it follows a Bernoulli distribution, with probability given by logistic regression $Pr_{i} = invlogit(a + bx_{i})$.
Now, set one simulation as follows:
$ y.rep[i] \sim Bernoulli (p[... | Measuring accuracy of a logistic regression-based model
Here's my quick suggestion: Since your dependent variable is binary, you can assume it follows a Bernoulli distribution, with probability given by logistic regression $Pr_{i} = invlogit(a + bx_{i})$.
|
10,284 | Measuring accuracy of a logistic regression-based model | There are many ways to estimate the accuracy of such predictions and the optimal choice really depends on what the estimation will be implemented for.
For example, if you plan to select a few high score hits for an expensive follow-up study you may want to maximize the precision at high scores. On the other hand, if the... | Measuring accuracy of a logistic regression-based model | There are many ways to estimate the accuracy of such predictions and the optimal choice really depends on what the estimation will be implemented for.
For example, if you plan to select a few high scor | Measuring accuracy of a logistic regression-based model
There are many ways to estimate the accuracy of such predictions and the optimal choice really depends on what the estimation will be implemented for.
For example, if you plan to select a few high score hits for an expensive follow-up study you may want to maximize... | Measuring accuracy of a logistic regression-based model
There are many ways to estimate the accuracy of such predictions and the optimal choice really depends on what the estimation will be implemented for.
For example, if you plan to select a few high scor |
10,285 | Measuring accuracy of a logistic regression-based model | You need to define what you mean by "accuracy". What you would like to know, please pardon me for putting words in your mouth, is how well your model fits the training data, and more importantly, how well this model "generalizes" to samples not in your training data. Although ROC curves can be useful in analyzing the... | Measuring accuracy of a logistic regression-based model | You need to define what you mean by "accuracy". What you would like to know, please pardon me for putting words in your mouth, is how well your model fits the training data, and more importantly, how | Measuring accuracy of a logistic regression-based model
You need to define what you mean by "accuracy". What you would like to know, please pardon me for putting words in your mouth, is how well your model fits the training data, and more importantly, how well this model "generalizes" to samples not in your training d... | Measuring accuracy of a logistic regression-based model
You need to define what you mean by "accuracy". What you would like to know, please pardon me for putting words in your mouth, is how well your model fits the training data, and more importantly, how |
10,286 | Application of wavelets to time-series-based anomaly detection algorithms | Wavelets are useful to detect singularities in a signal (see for example the paper here (see figure 3 for an illustration) and the references mentioned in this paper). I guess singularities can sometimes be an anomaly?
The idea here is that the Continuous wavelet transform (CWT) has maxima lines that propagates along ... | Application of wavelets to time-series-based anomaly detection algorithms | Wavelets are useful to detect singularities in a signal (see for example the paper here (see figure 3 for an illustration) and the references mentioned in this paper. I guess singularities can someti | Application of wavelets to time-series-based anomaly detection algorithms
Wavelets are useful to detect singularities in a signal (see for example the paper here (see figure 3 for an illustration) and the references mentioned in this paper. I guess singularities can sometimes be an anomaly?
The idea here is that the ... | Application of wavelets to time-series-based anomaly detection algorithms
Wavelets are useful to detect singularities in a signal (see for example the paper here (see figure 3 for an illustration) and the references mentioned in this paper. I guess singularities can someti |
10,287 | Application of wavelets to time-series-based anomaly detection algorithms | The list in the presentation that you reference seems fairly arbitrary to me, and the technique that would be used will really depend on the specific problem. You will note however that it also includes Kalman filters, so I suspect that the intended usage is as a filtering technique. Wavelet transforms generally fall... | Application of wavelets to time-series-based anomaly detection algorithms | The list in the presentation that you reference seems fairly arbitrary to me, and the technique that would be used will really depend on the specific problem. You will note however that it also inclu | Application of wavelets to time-series-based anomaly detection algorithms
The list in the presentation that you reference seems fairly arbitrary to me, and the technique that would be used will really depend on the specific problem. You will note however that it also includes Kalman filters, so I suspect that the inte... | Application of wavelets to time-series-based anomaly detection algorithms
The list in the presentation that you reference seems fairly arbitrary to me, and the technique that would be used will really depend on the specific problem. You will note however that it also inclu |
10,288 | Application of wavelets to time-series-based anomaly detection algorithms | Most commonly used and implemented discrete wavelet basis functions (as distinct from the CWT described in Robin's answer) have two nice properties that make them useful for anomaly detection:
They're compactly supported.
They act as band-pass filters with the pass-band determined by their support.
What this means in... | Application of wavelets to time-series-based anomaly detection algorithms | Most commonly used and implemented discrete wavelet basis functions (as distinct from the CWT described in Robin's answer) have two nice properties that make them useful for anomaly detection:
They'r | Application of wavelets to time-series-based anomaly detection algorithms
Most commonly used and implemented discrete wavelet basis functions (as distinct from the CWT described in Robin's answer) have two nice properties that make them useful for anomaly detection:
They're compactly supported.
They act as band-pass f... | Application of wavelets to time-series-based anomaly detection algorithms
Most commonly used and implemented discrete wavelet basis functions (as distinct from the CWT described in Robin's answer) have two nice properties that make them useful for anomaly detection:
They'r |
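As a hedged toy illustration of the band-pass property described here, first-level Haar detail coefficients (the simplest compactly supported wavelet) flag an isolated spike in an otherwise smooth series; a real application would use a full DWT library rather than this hand-rolled version:

```python
def haar_detail(signal):
    """First-level Haar detail coefficients: (x[2i] - x[2i+1]) / sqrt(2).
    Smooth regions give near-zero details; a sharp jump gives a large one."""
    s = 2 ** 0.5
    return [(signal[2 * i] - signal[2 * i + 1]) / s for i in range(len(signal) // 2)]

# Slowly varying series with one injected spike at index 9.
series = [0.1 * t for t in range(16)]
series[9] += 5.0

details = haar_detail(series)
spike_pair = max(range(len(details)), key=lambda i: abs(details[i]))
print(spike_pair)  # pair index 4 covers samples 8 and 9, where the spike sits
```

Because the wavelet is compactly supported, the large coefficient localizes the anomaly in time as well as in frequency band.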
10,289 | Application of wavelets to time-series-based anomaly detection algorithms | Really nice answers so far!
For the mathematical proof regarding robin's first sentence (i.e., wavelets are useful to detect singularities in a signal), originally by Yves Meyer, see Singularity Detection and Processing with Wavelets (Mallat and Hwang, 1992). Not only are they useful, they are guaranteed to concentrate... | Application of wavelets to time-series-based anomaly detection algorithms | Really nice answers so far!
For the mathematical proof regarding robin's first sentence (i.e., wavelets are useful to detect singularities in a signal), originally by Yves Meyer, see Singularity Detec | Application of wavelets to time-series-based anomaly detection algorithms
Really nice answers so far!
For the mathematical proof regarding robin's first sentence (i.e., wavelets are useful to detect singularities in a signal), originally by Yves Meyer, see Singularity Detection and Processing with Wavelets (Mallat and ... | Application of wavelets to time-series-based anomaly detection algorithms
Really nice answers so far!
For the mathematical proof regarding robin's first sentence (i.e., wavelets are useful to detect singularities in a signal), originally by Yves Meyer, see Singularity Detec
10,290 | Help me understand adjusted odds ratio in logistic regression | Odds are a way to express chances. Odds ratios are just that: one odds divided by another. That means an odds ratio is what you multiply one odds by to produce another. Let's see how they work in this common situation.
Converting between odds and probability
The odds of a binary response $Y$ are the ratio of the cha... | Help me understand adjusted odds ratio in logistic regression | Odds are a way to express chances. Odds ratios are just that: one odds divided by another. That means an odds ratio is what you multiply one odds by to produce another. Let's see how they work in t | Help me understand adjusted odds ratio in logistic regression
Odds are a way to express chances. Odds ratios are just that: one odds divided by another. That means an odds ratio is what you multiply one odds by to produce another. Let's see how they work in this common situation.
Converting between odds and probabil... | Help me understand adjusted odds ratio in logistic regression
Odds are a way to express chances. Odds ratios are just that: one odds divided by another. That means an odds ratio is what you multiply one odds by to produce another. Let's see how they work in t |
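The odds/probability conversions that answer walks through can be sketched directly (a minimal illustration, not code from the answer):

```python
def prob_to_odds(p):
    """Odds = chance it happens divided by chance it doesn't."""
    return p / (1.0 - p)

def odds_to_prob(odds):
    return odds / (1.0 + odds)

# A probability of 0.75 corresponds to odds of 3 (i.e. 3:1), and back again.
print(prob_to_odds(0.75))   # 3.0
print(odds_to_prob(3.0))    # 0.75

# An odds ratio is what you multiply one odds by to produce another:
odds_ratio = prob_to_odds(0.75) / prob_to_odds(0.5)  # 3.0 / 1.0 = 3.0
print(odds_ratio)
```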
10,291 | Topic models and word co-occurrence methods | Recently, a huge body of literature discussing how to extract information from written text has grown. Hence I will just describe four milestones/popular models and their advantages/disadvantages and thus highlight (some of) the main differences (or at least what I think are the main/most important differences).
You me... | Topic models and word co-occurrence methods | Recently, a huge body of literature discussing how to extract information from written text has grown. Hence I will just describe four milestones/popular models and their advantages/disadvantages and | Topic models and word co-occurrence methods
Recently, a huge body of literature discussing how to extract information from written text has grown. Hence I will just describe four milestones/popular models and their advantages/disadvantages and thus highlight (some of) the main differences (or at least what I think are ... | Topic models and word co-occurrence methods
Recently, a huge body of literature discussing how to extract information from written text has grown. Hence I will just describe four milestones/popular models and their advantages/disadvantages and |
10,292 | Topic models and word co-occurrence methods | I might be 3 years late but I want to follow up on your question about the example of "high-order of co-occurrences".
Basically, if term t1 co-occurs with term t2 that co-occurs with term t3, then term t1 is the 2nd-order co-occurrence with term t3. You can go to higher order if you want but at the end you control how simil... | Topic models and word co-occurrence methods | I might be 3 years late but I want to follow up on your question about the example of "high-order of co-occurrences".
Basically, if term t1 co-occurs with term t2 that co-occurs with term t3, then term t1 | Topic models and word co-occurrence methods
I might be 3 years late but I want to follow up on your question about the example of "high-order of co-occurrences".
Basically, if term t1 co-occurs with term t2 that co-occurs with term t3, then term t1 is the 2nd-order co-occurrence with term t3. You can go to higher order if y... | Topic models and word co-occurrence methods
I might be 3 years late but I want to follow up on your question about the example of "high-order of co-occurrences".
Basically, if term t1 co-occurs with term t2 that co-occurs with term t3, then term t1 |
10,293 | Topic models and word co-occurrence methods | LDA can capture higher-order of co-occurrences of terms (due to the assumption that each topic is a multinomial distribution over terms), which is not possible by just computing PMI between terms. | Topic models and word co-occurrence methods | LDA can capture higher-order of co-occurrences of terms (due to the assumption that each topic is a multinomial distribution over terms), which is not possible by just computing PMI between terms. | Topic models and word co-occurrence methods
LDA can capture higher-order of co-occurrences of terms (due to the assumption that each topic is a multinomial distribution over terms), which is not possible by just computing PMI between terms. | Topic models and word co-occurrence methods
LDA can capture higher-order of co-occurrences of terms (due to the assumption that each topic is a multinomial distribution over terms), which is not possible by just computing PMI between terms.
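For contrast with LDA's higher-order co-occurrence, first-order PMI between two terms needs only document co-occurrence counts; a toy sketch over a made-up corpus:

```python
import math

def pmi(term_a, term_b, docs):
    """Pointwise mutual information: log p(a, b) / (p(a) * p(b)),
    estimated from document-level co-occurrence counts."""
    n = len(docs)
    p_a = sum(term_a in d for d in docs) / n
    p_b = sum(term_b in d for d in docs) / n
    p_ab = sum(term_a in d and term_b in d for d in docs) / n
    return math.log(p_ab / (p_a * p_b))

docs = [{"cat", "dog"}, {"cat", "dog"}, {"cat", "fish"}, {"stock", "market"}]
print(pmi("cat", "dog", docs))  # positive: they co-occur more than chance
```

PMI like this only sees direct (first-order) co-occurrence in the counts; chains such as t1-t2-t3 are exactly what it misses and what topic models can pick up.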
10,294 | What is a white noise process? | A white noise process is one with a mean zero and no correlation between its values at different times. See the 'white random process' section of Wikipedia's article on white noise. | What is a white noise process? | A white noise process is one with a mean zero and no correlation between its values at different times. See the 'white random process' section of Wikipedia's article on white noise. | What is a white noise process?
A white noise process is one with a mean zero and no correlation between its values at different times. See the 'white random process' section of Wikipedia's article on white noise. | What is a white noise process?
A white noise process is one with a mean zero and no correlation between its values at different times. See the 'white random process' section of Wikipedia's article on white noise. |
10,295 | What is a white noise process? | A white noise process is a random process of random variables that are uncorrelated, have mean zero, and a finite variance.
Formally, $X(t)$ is a white noise process if $$E(X(t)) = 0, E(X(t)^2) = S^2\text{, and } E(X(t)X(h)) = 0 \text{ for } t\neq h\text{.}$$
A slightly stronger condition is that they are independent ... | What is a white noise process? | A white noise process is a random process of random variables that are uncorrelated, have mean zero, and a finite variance.
Formally, $X(t)$ is a white noise process if $$E(X(t)) = 0, E(X(t)^2) = S^2 | What is a white noise process?
A white noise process is a random process of random variables that are uncorrelated, have mean zero, and a finite variance.
Formally, $X(t)$ is a white noise process if $$E(X(t)) = 0, E(X(t)^2) = S^2\text{, and } E(X(t)X(h)) = 0 \text{ for } t\neq h\text{.}$$
A slightly stronger conditio... | What is a white noise process?
A white noise process is a random process of random variables that are uncorrelated, have mean zero, and a finite variance.
Formally, $X(t)$ is a white noise process if $$E(X(t)) = 0, E(X(t)^2) = S^2 |
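Under the definition above, an iid Gaussian sequence qualifies as white noise; a quick empirical sketch (sample size and seed are arbitrary) checking that the sample mean and lag-1 autocovariance are near zero and the variance near one:

```python
import random

rng = random.Random(42)
n = 20000
x = [rng.gauss(0.0, 1.0) for _ in range(n)]  # iid N(0, 1) draws

mean = sum(x) / n
var = sum(v * v for v in x) / n
# Sample lag-1 autocovariance: E[X(t) X(t+1)] should be ~0 for white noise.
lag1 = sum(x[t] * x[t + 1] for t in range(n - 1)) / (n - 1)

print(mean, var, lag1)  # close to 0, 1, 0
```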
10,296 | What is a white noise process? | I myself usually think of white noise as an iid sequence with zero mean. At different times, values of the process are then independent of each other, which is a much stronger requirement than correlation zero. The best thing about this definition is that it works in any context.
Side note. I only explained my intuition, t... | What is a white noise process? | I myself usually think of white noise as an iid sequence with zero mean. At different times, values of the process are then independent of each other, which is a much stronger requirement than correlatio
I myself usually think of white noise as an iid sequence with zero mean. At different times, values of the process are then independent of each other, which is a much stronger requirement than correlation zero. The best thing about this definition is that it works in any context.
Side note. I... | What is a white noise process?
I myself usually think of white noise as an iid sequence with zero mean. At different times, values of the process are then independent of each other, which is a much stronger requirement than correlatio
10,297 | Generative vs discriminative models (in Bayesian context) | Both are used in supervised learning where you want to learn a rule that maps input x to output y, given a number of training examples of the form $\{(x_i,y_i)\}$. A generative model (e.g., naive Bayes) explicitly models the joint probability distribution $p(x,y)$ and then uses the Bayes rule to compute $p(y|x)$. On th... | Generative vs discriminative models (in Bayesian context) | Both are used in supervised learning where you want to learn a rule that maps input x to output y, given a number of training examples of the form $\{(x_i,y_i)\}$. A generative model (e.g., naive Baye | Generative vs discriminative models (in Bayesian context)
Both are used in supervised learning where you want to learn a rule that maps input x to output y, given a number of training examples of the form $\{(x_i,y_i)\}$. A generative model (e.g., naive Bayes) explicitly models the joint probability distribution $p(x,y... | Generative vs discriminative models (in Bayesian context)
Both are used in supervised learning where you want to learn a rule that maps input x to output y, given a number of training examples of the form $\{(x_i,y_i)\}$. A generative model (e.g., naive Baye |
10,298 | Generative vs discriminative models (in Bayesian context) | One addition to the above answer:
Since the discriminative model cares about P(Y|X) only, while the generative model cares about P(X,Y) and P(X) at the same time, in order to predict P(Y|X) well, the generative model has fewer degrees of freedom in the model compared to the discriminative model. So the generative model is more robust and less prone to overfitting, whi... | Generative vs discriminative models (in Bayesian context) | One addition to the above answer:
Since the discriminative model cares about P(Y|X) only, while the generative model cares about P(X,Y) and P(X) at the same time, in order to predict P(Y|X) well, the generative model has fewer degrees o | Generative vs discriminative models (in Bayesian context)
One addition to the above answer:
Since the discriminative model cares about P(Y|X) only, while the generative model cares about P(X,Y) and P(X) at the same time, in order to predict P(Y|X) well, the generative model has fewer degrees of freedom in the model compared to the discriminative model. So gene... | Generative vs discriminative models (in Bayesian context)
One addition to the above answer:
Since the discriminative model cares about P(Y|X) only, while the generative model cares about P(X,Y) and P(X) at the same time, in order to predict P(Y|X) well, the generative model has fewer degrees o
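A toy numeric illustration of the generative route: model the joint p(x, y) on a tiny discrete example (the numbers are made up), then recover p(y|x) by the Bayes rule; a discriminative model would instead fit p(y|x) directly and never model p(x):

```python
# Joint distribution p(x, y) over x in {0, 1} and y in {0, 1} (made-up numbers).
joint = {(0, 0): 0.3, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.4}

def p_x(x):
    """Marginal p(x), obtained by summing the joint over y."""
    return sum(p for (xi, _), p in joint.items() if xi == x)

def p_y_given_x(y, x):
    """Bayes rule: p(y|x) = p(x, y) / p(x)."""
    return joint[(x, y)] / p_x(x)

print(p_y_given_x(1, 1))  # 0.4 / 0.6 ≈ 0.667
```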
10,299 | From a statistical perspective, can one infer causality using propensity scores with an observational study? | At the beginning of an article aiming at promoting the use of PSs in epidemiology, Oakes and Church (1) cited Hernán and Robins's claims about confounding effect in epidemiology (2):
Can you guarantee that the results
from your observational study are
unaffected by unmeasured confounding?
The only answer an epid... | From a statistical perspective, can one infer causality using propensity scores with an observationa | At the beginning of an article aiming at promoting the use of PSs in epidemiology, Oakes and Church (1) cited Hernán and Robins's claims about confounding effect in epidemiology (2):
Can you guarante | From a statistical perspective, can one infer causality using propensity scores with an observational study?
At the beginning of an article aiming at promoting the use of PSs in epidemiology, Oakes and Church (1) cited Hernán and Robins's claims about confounding effect in epidemiology (2):
Can you guarantee that the ... | From a statistical perspective, can one infer causality using propensity scores with an observationa
At the beginning of an article aiming at promoting the use of PSs in epidemiology, Oakes and Church (1) cited Hernán and Robins's claims about confounding effect in epidemiology (2):
Can you guarante |
10,300 | From a statistical perspective, can one infer causality using propensity scores with an observational study? | Propensity scores are typically used in the matching literature. Propensity scores use pre-treatment covariates to estimate the probability of receiving treatment. Essentially, a regression (either just regular OLS or logit, probit, etc) is used to calculate the propensity score with treatment as your outcome and pre-t... | From a statistical perspective, can one infer causality using propensity scores with an observationa | Propensity scores are typically used in the matching literature. Propensity scores use pre-treatment covariates to estimate the probability of receiving treatment. Essentially, a regression (either ju | From a statistical perspective, can one infer causality using propensity scores with an observational study?
Propensity scores are typically used in the matching literature. Propensity scores use pre-treatment covariates to estimate the probability of receiving treatment. Essentially, a regression (either just regular ... | From a statistical perspective, can one infer causality using propensity scores with an observationa
Propensity scores are typically used in the matching literature. Propensity scores use pre-treatment covariates to estimate the probability of receiving treatment. Essentially, a regression (either ju |
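Once propensity scores are estimated (by whatever regression), the matching step itself is simple; a hypothetical nearest-neighbor sketch over precomputed scores (all names and numbers here are illustrative, not from the answer):

```python
def nearest_neighbor_match(treated, control):
    """Match each treated unit to the control unit with the closest
    propensity score (with replacement). Scores are assumed precomputed."""
    matches = {}
    for t_id, t_score in treated.items():
        matches[t_id] = min(control, key=lambda c_id: abs(control[c_id] - t_score))
    return matches

treated = {"t1": 0.80, "t2": 0.35}          # unit -> propensity score
control = {"c1": 0.78, "c2": 0.40, "c3": 0.10}
print(nearest_neighbor_match(treated, control))  # {'t1': 'c1', 't2': 'c2'}
```

The causal interpretation still rests on the untestable assumption that treatment is ignorable given the measured covariates, as the answers above stress.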