idx | question | answer
14,001 | Ideas for "lab notebook" software?
I found Xmind useful. You can attach anything, and the tree structure is really useful for organizing. I especially like the feature where you can drill down into a node (topic). There are other, similar software products that exploit the same concept.
14,002 | Ideas for "lab notebook" software?
Personally, I have found the Livescribe 'smartpen' a godsend.
It merges the trusty 'old-world charm' of a traditional pen-and-paper notebook but includes the ability to record sound (which it synchronises with your pen strokes) ready for later revision. NB: there is a downside, and that is you have to buy special paper ...
14,003 | Ideas for "lab notebook" software?
You might want to check out the latest Zotero beta, which is now standalone and doesn't require Firefox.
14,004 | Ideas for "lab notebook" software?
Check out this article, Beautifying Data in the Real World, from Nature Precedings for some ideas.
14,005 | Ideas for "lab notebook" software?
How about a Boogie Board? You can write your notes on a slate and record them. It's the same idea as the LiveScribe pen, but you can save them as PDF files... It isn't out yet, but will be in less than a month.
14,006 | How to plot an ellipse from eigenvalues and eigenvectors in R?
You could extract the eigenvectors and -values via eigen(A). However, it's simpler to use the Cholesky decomposition. Note that when plotting confidence ellipses for data, the ellipse axes are usually scaled to have length = square root of the corresponding eigenvalues, and this is what the Cholesky decomposition gives...
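A minimal sketch of the Cholesky-based construction (the 2x2 matrix `A` and the centre `ctr` are made-up illustrative values, not from the original answer):

```r
# Map the unit circle through the Cholesky factor of A to get the ellipse.
A   <- matrix(c(2.2, 0.4, 0.4, 2.8), nrow = 2)  # example SPD "covariance" matrix
ctr <- c(0, 0)                                  # assumed ellipse centre
ang <- seq(0, 2 * pi, length.out = 200)
circ <- cbind(cos(ang), sin(ang))               # points on the unit circle
ell  <- sweep(circ %*% chol(A), 2, ctr, "+")    # scale/rotate, then shift
plot(ell, type = "l", asp = 1, xlab = "x", ylab = "y")
```

Because t(chol(A)) %*% chol(A) = A, the mapped points have exactly the right axis lengths (square roots of the eigenvalues) and orientation.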
14,007 | How to plot an ellipse from eigenvalues and eigenvectors in R?
I think this is the R code that you want. I borrowed the R code from this thread on the r-mailing list. The idea basically is: the major and minor half-diameters come from the two eigenvalues, and you rotate the ellipse by the angle between the first eigenvector and the x-axis:
mat <- matrix(c(2.2, 0.4, 0.4, 2.8)...
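A sketch of the eigen-based construction described above (the matrix is the one the answer begins to define; taking square roots of the eigenvalues as the half-diameters is the usual convention, as noted in the previous answer):

```r
mat <- matrix(c(2.2, 0.4, 0.4, 2.8), nrow = 2)
e   <- eigen(mat)
ang <- seq(0, 2 * pi, length.out = 200)
# scale the unit circle by the half-diameters, then rotate by the eigenvectors
circ <- cbind(cos(ang), sin(ang)) %*% diag(sqrt(e$values))
ell  <- circ %*% t(e$vectors)
plot(ell, type = "l", asp = 1)
```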
14,008 | What is the community's take on the Fourth Quadrant?
I was at a meeting of the ASA (American Statistical Association) a couple of years ago where Taleb talked about his "fourth quadrant" and it seemed his remarks were well received. Taleb was much more careful in his language when addressing an auditorium of statisticians than he has been in his popular writing.
Some sta...
14,009 | Why use the logit link in beta regression?
Justification of the link function: A link function $g(\mu): (0,1) \rightarrow \mathbb{R}$ assures that all fitted values $\hat \mu = g^{-1}(x^\top \hat \beta)$ are always in $(0, 1)$. This may not matter that much in some applications, e.g., because the predictions are only evaluated in-sample or are not too close to 0...
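In base R, the logit link and its inverse are `qlogis()` and `plogis()`; a quick sketch showing that the inverse link always lands in $(0, 1)$ (the linear-predictor values are arbitrary):

```r
eta <- c(-5, 0, 5)     # assumed linear-predictor values x' beta
mu  <- plogis(eta)     # inverse link g^{-1}: always lands in (0, 1)
qlogis(mu)             # link g: maps the fitted means back to the real line
```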
14,010 | Why use the logit link in beta regression?
It is incorrect that logistic regression can only be used to model binary outcome data. The logistic regression model is appropriate for any data where 1) the expected value of the outcome follows a logistic curve as a function of the predictors, and 2) the variance of the outcome is the expected outcome times one minus the...
14,011 | Singularity issues in Gaussian mixture model
If we want to fit a Gaussian to a single data point using maximum likelihood, we will get a very spiky Gaussian that "collapses" to that point. The variance is zero when there's only one point, which in the multi-variate Gaussian case leads to a singular covariance matrix, so it's called the singularity problem.
When...
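The "collapse" is easy to see numerically: the density of a point under a Gaussian centred on it grows without bound as the standard deviation shrinks (the data point here is made up):

```r
x <- 1.7                          # a single (made-up) data point
for (s in c(1, 0.1, 0.001)) {
  cat(sprintf("sd = %6.3f  density = %g\n", s, dnorm(x, mean = x, sd = s)))
}
# dnorm(x, x, s) = 1 / (sqrt(2 * pi) * s), unbounded as s -> 0
```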
14,012 | Singularity issues in Gaussian mixture model
This answer will give an insight into what is happening that leads to a singular covariance matrix during the fitting of a GMM to a dataset, why this is happening, as well as what we can do to prevent it.
Therefore, we had best start by recapitulating the steps during the fitting of a Gaussian Mixture Model to a dataset.
...
14,013 | Singularity issues in Gaussian mixture model | Recall that this problem did not arise in the case of a single
Gaussian distribution. To understand the difference, note that if a
single Gaussian collapses onto a data point it will contribute
multiplicative factors to the likelihood function arising from the
other data points and these factors will go to zero... | Singularity issues in Gaussian mixture model | Recall that this problem did not arise in the case of a single
Gaussian distribution. To understand the difference, note that if a
single Gaussian collapses onto a data point it will contribute
| Singularity issues in Gaussian mixture model
Recall that this problem did not arise in the case of a single
Gaussian distribution. To understand the difference, note that if a
single Gaussian collapses onto a data point it will contribute
multiplicative factors to the likelihood function arising from the
other ... | Singularity issues in Gaussian mixture model
Recall that this problem did not arise in the case of a single
Gaussian distribution. To understand the difference, note that if a
single Gaussian collapses onto a data point it will contribute
|
14,014 | Singularity issues in Gaussian mixture model
Imho, all the answers miss a fundamental fact. If one looks at the parameter space for a Gaussian mixture model, this space is singular along the subspace where there are fewer than the full number of components in the mixture. That means that derivatives are automatically zero and typically the whole subspace will sh...
14,015 | Singularity issues in Gaussian mixture model
For a single Gaussian, the mean may possibly equal one of the data points ($x_n$ for example) and then there is the following term in the likelihood function:
\begin{equation}
{\cal N}(x_n|x_n,\sigma_j^2 1\!\!1) = \frac{1}{(2\pi)^{1/2}\sigma_j} \exp\left(-\frac{1}{2\sigma_j^2}|x_n-x_n|^2\right) = \frac{1}{(2\pi)^{1/2}\sigma_j} \rightarrow \infty \quad \text{as } \sigma_j \rightarrow 0
\end{equation}
This term diverges as the variance collapses, which is the source of the singularity.
14,016 | Log probability vs product of probabilities
I fear you have misunderstood what the article intends. This is no great surprise, since it's somewhat unclearly written. There are two different things going on.
The first is simply to work on the log scale.
That is, instead of "$p_{AB} = p_A\cdot p_B$" (when you have independence), one can instead write "$\log(p_{AB}) = \log(p_A) + \log(p_B)$"...
14,017 | Why at all consider sampling without replacement in a practical application? | Expanding on the answer of @Scortchi . . .
Suppose the population had 5 members and you have budget to sample 5 individuals. You are interested in the population mean of a variable X, a characteristic of individuals in this population. You could do it your way, and randomly sample with replacement. The variance of ... | Why at all consider sampling without replacement in a practical application? | Expanding on the answer of @Scortchi . . .
Suppose the population had 5 members and you have budget to sample 5 individuals. You are interested in the population mean of a variable X, a characterist | Why at all consider sampling without replacement in a practical application?
Expanding on the answer of @Scortchi . . .
Suppose the population had 5 members and you have budget to sample 5 individuals. You are interested in the population mean of a variable X, a characteristic of individuals in this population. You ... | Why at all consider sampling without replacement in a practical application?
Expanding on the answer of @Scortchi . . .
Suppose the population had 5 members and you have budget to sample 5 individuals. You are interested in the population mean of a variable X, a characterist |
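A quick simulation sketch of this scenario (the 5 population values are made up): with the budget of 5 draws, sampling without replacement exhausts the population and recovers the mean exactly, while sampling with replacement leaves a noisy estimate.

```r
X <- c(3, 7, 1, 9, 5)   # made-up values of X for the 5 population members
set.seed(1)
with_repl <- replicate(1e4, mean(sample(X, 5, replace = TRUE)))
without   <- mean(sample(X, 5, replace = FALSE))  # exhausts the population
c(pop_mean = mean(X), without_repl = without, sd_with_repl = sd(with_repl))
```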
14,018 | Why at all consider sampling without replacement in a practical application?
The precision of estimates is usually higher for sampling without replacement compared to sampling with replacement.
For example, it is possible to select only one element $n$ times when sampling is done with replacement in an extreme case. That could lead to a very imprecise estimate of the population parameter of interest...
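To make the extreme case concrete, a back-of-the-envelope sketch for a hypothetical population of N = 5 and n = 5 with-replacement draws:

```r
# Chance that all n = 5 with-replacement draws from N = 5 elements
# hit the same element: N * (1/N)^n
5 * (1/5)^5   # = 0.0016
```

Small, but strictly positive; without replacement this outcome is impossible.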
14,019 | Why at all consider sampling without replacement in a practical application?
I don't think the answers here are totally adequate, and they seem to argue for the limiting case in which your amount of data is very low.
With a sufficiently large sample, this isn't a worry at all, especially with many bootstrap resamples (~1000). If I have sampled from the true distribution a dataset of size 10,00...
14,020 | Why at all consider sampling without replacement in a practical application?
But from a practical POV I don't see why one would consider sampling without replacement given the advantages of with replacement.
In practice, sampling without replacement saves you the need to, well, make replacements. This has two benefits:
You can just take a larger sample and consider it as multiple individual s...
14,021 | Why at all consider sampling without replacement in a practical application?
I have a result which treats without replacement practically as with replacement and removes all the difficulties. Note that with-replacement calculations are much easier. So, if a probability involves p and q, the probabilities of success and failure, in the with-replacement case, the corresponding probability in the without-repla...
14,022 | Can lmer() use splines as random effects?
If what you show works for a lmer formula for a random-effects term, then you should be able to use functions from the splines package that comes with R to set up the relevant basis functions.
require("lme4")
require("splines")
## note: in current versions of lme4, generalized models are fitted with glmer();
## the original call here was lmer(..., family = "poisson"), which is deprecated
glmer(counts ~ dependent_variable + (bs(t) | ID), family = poisson)
Depending on what you wan...
14,023 | How can I calculate the conditional probability of several events?
Another approach would be:
P(A| B, C, D) = P(A, B, C, D)/P(B, C, D)
             = P(B| A, C, D).P(A, C, D)/P(B, C, D)
             = P(B| A, C, D).P(C| A, D).P(A, D)/{P(C| B, D).P(B, D)}
             = P(B| A, C, D).P(C| A, D).P(D| A).P(A)/{P(C| B, D).P(D| B).P(B)}
Note the similarity to:
P(A| B) = P(A, B)/P...
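The decomposition can be checked numerically; a sketch using a randomly generated joint distribution over four binary events (all values made up):

```r
set.seed(42)
g <- expand.grid(A = 0:1, B = 0:1, C = 0:1, D = 0:1)
g$p <- runif(16); g$p <- g$p / sum(g$p)        # random joint probabilities
P <- function(e) sum(g$p[eval(e, g)])          # P(event), e a quoted condition

lhs <- P(quote(A & B & C & D)) / P(quote(B & C & D))          # P(A | B,C,D)
rhs <- P(quote(A & B & C & D)) / P(quote(A & C & D)) *        # P(B | A,C,D)
       P(quote(A & C & D))     / P(quote(A & D))     *        # P(C | A,D)
       P(quote(A & D))         / P(quote(A))         *        # P(D | A)
       P(quote(A)) /
       ( P(quote(B & C & D)) / P(quote(B & D)) *              # P(C | B,D)
         P(quote(B & D))     / P(quote(B))     *              # P(D | B)
         P(quote(B)) )
all.equal(lhs, rhs)   # TRUE: the decomposition holds
```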
14,024 | How can I calculate the conditional probability of several events?
Take the intersection of B, C and D, and call it U. Then compute P(A|U).
14,025 | How can I calculate the conditional probability of several events?
Check this Wikipedia page under the subsection named Extensions; they do show how to derive conditional probability involving more than two events.
14,026 | When can you use data-based criteria to specify a regression model?
Variable selection techniques, in general (whether stepwise, backward, forward, all subsets, AIC, etc.), capitalize on chance or random patterns in the sample data that do not exist in the population. The technical term for this is over-fitting and it is especially problematic with small datasets, though it is not exc...
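A small simulation sketch of this capitalisation on chance (sample size, number of predictors, and seed are arbitrary): even when the response is pure noise, picking the best of many candidate predictors tends to produce an impressively small p-value.

```r
set.seed(3)
y <- rnorm(50)                      # noise response
X <- matrix(rnorm(50 * 20), nrow = 50)  # 20 pure-noise predictors
pvals <- apply(X, 2, function(x) summary(lm(y ~ x))$coefficients[2, 4])
min(pvals)   # the "best" predictor often looks significant by chance alone
```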
14,027 | When can you use data-based criteria to specify a regression model?
In the social science context where I come from, the issue is whether you are interested in (a) prediction or (b) testing a focused research question.
If the purpose is prediction, then data-driven approaches are appropriate.
If the purpose is to examine a focused research question, then it is important to consider which...
14,028 | When can you use data-based criteria to specify a regression model?
I don't think it is possible to do Bonferroni or similar corrections to adjust for variable selection in regression, because all the tests and steps involved in model selection are not independent.
One approach is to formulate the model using one set of data, and do inference on a different set of data. This is done in f...
14,029 | When can you use data-based criteria to specify a regression model?
Richard Berk has a recent article where he demonstrates through simulation the problems of such data snooping and statistical inference. As Rob suggested, it is more problematic than simply correcting for multiple hypothesis tests.
Statistical Inference After Model Selection
by: Richard Berk, Lawrence Brown, Linda Zhao
...
14,030 | When can you use data-based criteria to specify a regression model? | If I understand your question right, then the answer to your problem is to correct the p-values according to the number of hypotheses.
For example Holm-Bonferroni corrections, where you sort the hypotheses (= your different models) by their p-value and reject those with a p smaller than (desired p-value / index).
More... | When can you use data-based criteria to specify a regression model? | If I understand your question right, than the answer to your problem is to correct the p-values accordingly to the number of hypothesis.
For example Holm-Bonferoni corrections, where you sort the hypo | When can you use data-based criteria to specify a regression model?
If I understand your question right, then the answer to your problem is to correct the p-values according to the number of hypotheses.
For example Holm-Bonferoni corrections, where you sort the hypothesis (= your different models) by their p-value an... | When can you use data-based criteria to specify a regression model?
If I understand your question right, then the answer to your problem is to correct the p-values according to the number of hypotheses.
For example Holm-Bonferoni corrections, where you sort the hypo |
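The Holm-Bonferroni step-down rule mentioned in this answer can be sketched in a few lines. This is a generic illustration with made-up p-values; the function name and `alpha` default are my own:

```python
import numpy as np

def holm_bonferroni(pvals, alpha=0.05):
    """Sort p-values ascending; compare the i-th smallest (0-based)
    against alpha / (m - i) and stop at the first failure."""
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    reject = np.zeros(m, dtype=bool)
    for i, idx in enumerate(np.argsort(pvals)):
        if pvals[idx] <= alpha / (m - i):
            reject[idx] = True
        else:
            break  # every larger p-value is retained as well
    return reject

# Four hypothetical model-comparison p-values
print(holm_bonferroni([0.01, 0.04, 0.03, 0.005]))  # rejects the 1st and 4th
```

Holm's procedure controls the family-wise error rate under arbitrary dependence, which is why it is often preferred to plain Bonferroni; as the other answers note, though, it still does not capture the dependence structure created by stepwise model selection.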
14,031 | Is a spline interpolation considered to be a nonparametric model? | This is a good question. Frequently, one will see smoothing regressions (e.g., splines, but also smoothing GAMs, running lines, LOWESS, etc.) described as nonparametric regression models.
These models are nonparametric in the sense that using them does not involve reported quantities like $\widehat{\beta}$, $\widehat{\... | Is a spline interpolation considered to be a nonparametric model? | This is a good question. Frequently, one will see smoothing regressions (e.g., splines, but also smoothing GAMs, running lines, LOWESS, etc.) described as nonparametric regression models.
These models | Is a spline interpolation considered to be a nonparametric model?
This is a good question. Frequently, one will see smoothing regressions (e.g., splines, but also smoothing GAMs, running lines, LOWESS, etc.) described as nonparametric regression models.
These models are nonparametric in the sense that using them does n... | Is a spline interpolation considered to be a nonparametric model?
This is a good question. Frequently, one will see smoothing regressions (e.g., splines, but also smoothing GAMs, running lines, LOWESS, etc.) described as nonparametric regression models.
These models |
14,032 | Is a spline interpolation considered to be a nonparametric model? | Strictly speaking, every model is parametric in the sense of having parameters. When we speak of a "nonparametric model", we really mean a model with the number of parameters being manageable.
The technical definition of "nonparametric" just says "infinite or unspecified", but in practice it means "infinite, or so larg... | Is a spline interpolation considered to be a nonparametric model? | Strictly speaking, every model is parametric in the sense of having parameters. When we speak of a "nonparametric model", we really mean a model with the number of parameters being manageable.
The tec | Is a spline interpolation considered to be a nonparametric model?
Strictly speaking, every model is parametric in the sense of having parameters. When we speak of a "nonparametric model", we really mean a model with the number of parameters being manageable.
The technical definition of "nonparametric" just says "infini... | Is a spline interpolation considered to be a nonparametric model?
Strictly speaking, every model is parametric in the sense of having parameters. When we speak of a "nonparametric model", we really mean a model with the number of parameters being manageable.
The tec |
14,033 | What is a surrogate loss function? | In the context of learning, say you have a classification problem with data set $\{(X_1, Y_1), \dots, (X_n, Y_n)\}$, where $X_n$ are your features and $Y_n$ are your true labels.
Given a hypothesis function $h(x)$, the loss function $l: (h(X_n), Y_n) \rightarrow \mathbb{R}$ takes the hypothesis function's prediction (i... | What is a surrogate loss function? | In the context of learning, say you have a classification problem with data set $\{(X_1, Y_1), \dots, (X_n, Y_n)\}$, where $X_n$ are your features and $Y_n$ are your true labels.
Given a hypothesis fu | What is a surrogate loss function?
In the context of learning, say you have a classification problem with data set $\{(X_1, Y_1), \dots, (X_n, Y_n)\}$, where $X_n$ are your features and $Y_n$ are your true labels.
Given a hypothesis function $h(x)$, the loss function $l: (h(X_n), Y_n) \rightarrow \mathbb{R}$ takes the ... | What is a surrogate loss function?
In the context of learning, say you have a classification problem with data set $\{(X_1, Y_1), \dots, (X_n, Y_n)\}$, where $X_n$ are your features and $Y_n$ are your true labels.
Given a hypothesis fu |
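To make this answer concrete, here is a minimal sketch contrasting the 0-1 loss with the hinge loss, a standard convex surrogate. Labels are taken in {-1, +1} and the margin values are made up:

```python
import numpy as np

def zero_one_loss(margin):
    # margin = y * h(x); loss is 1 exactly when the prediction is wrong
    return (margin <= 0).astype(float)

def hinge_loss(margin):
    # convex surrogate: also penalizes correct but low-confidence predictions
    return np.maximum(0.0, 1.0 - margin)

margins = np.array([-2.0, -0.5, 0.5, 2.0])
print(zero_one_loss(margins))  # 1, 1, 0, 0
print(hinge_loss(margins))     # 3.0, 1.5, 0.5, 0.0
```

The hinge loss upper-bounds the 0-1 loss everywhere, which is the property that makes minimizing it a sensible (and tractable) proxy for minimizing classification error.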
14,034 | What is a surrogate loss function? | On a very general note, this function is used to penalize the misclassifications.
In the end, your aim is to classify the data in correct classes and to evaluate your results. To train the model you develop the loss functions and most frequently Mean Squared Error. But in MSE the accuracy may not reflect the true accur... | What is a surrogate loss function? | On a very general note, this function is used to penalize the misclassifications.
In the end, your aim is to classify the data in correct classes and to evaluate your results. To train the model you d | What is a surrogate loss function?
On a very general note, this function is used to penalize the misclassifications.
In the end, your aim is to classify the data in correct classes and to evaluate your results. To train the model you develop the loss functions and most frequently Mean Squared Error. But in MSE the accu... | What is a surrogate loss function?
On a very general note, this function is used to penalize the misclassifications.
In the end, your aim is to classify the data in correct classes and to evaluate your results. To train the model you d |
14,035 | Doing correct statistics in a working environment? | In a nutshell, you're right and he's wrong. The tragedy of data analysis is that a lot of people do it, but only a minority of people do it well, partly due to a weak education in data analysis and partly due to apathy. Turn a critical eye to most any published research article that doesn't have a statistician or a mac... | Doing correct statistics in a working environment? | In a nutshell, you're right and he's wrong. The tragedy of data analysis is that a lot of people do it, but only a minority of people do it well, partly due to a weak education in data analysis and pa | Doing correct statistics in a working environment?
In a nutshell, you're right and he's wrong. The tragedy of data analysis is that a lot of people do it, but only a minority of people do it well, partly due to a weak education in data analysis and partly due to apathy. Turn a critical eye to most any published researc... | Doing correct statistics in a working environment?
In a nutshell, you're right and he's wrong. The tragedy of data analysis is that a lot of people do it, but only a minority of people do it well, partly due to a weak education in data analysis and pa |
14,036 | Doing correct statistics in a working environment? | Kodiologist is right - you're right, he's wrong. However, sadly this is an even more commonplace problem than what you're encountering. You're actually in an industry that's doing relatively well.
For example, I currently work in a field where specifications on products need to be set. This is nearly always done by mon... | Doing correct statistics in a working environment? | Kodiologist is right - you're right, he's wrong. However sadly this is an even more common place problem than what you're encountering. You're actually in an industry that's doing relatively well.
For | Doing correct statistics in a working environment?
Kodiologist is right - you're right, he's wrong. However, sadly this is an even more commonplace problem than what you're encountering. You're actually in an industry that's doing relatively well.
For example, I currently work in a field where specifications on product... | Doing correct statistics in a working environment?
Kodiologist is right - you're right, he's wrong. However, sadly this is an even more commonplace problem than what you're encountering. You're actually in an industry that's doing relatively well.
For |
14,037 | Doing correct statistics in a working environment? | What is described appears like a somewhat bad experience. Nevertheless it should not be something that causes one to immediately question their own educational background nor the statistical judgement of their supervisor/manager.
Yes, very, very likely you are correct to suggest using CV instead of $R^2$ for model sele... | Doing correct statistics in a working environment? | What is described appears like a somewhat bad experience. Nevertheless it should not be something that causes one to immediately question their own educational background nor the statistical judgement | Doing correct statistics in a working environment?
What is described appears like a somewhat bad experience. Nevertheless it should not be something that causes one to immediately question their own educational background nor the statistical judgement of their supervisor/manager.
Yes, very, very likely you are correct ... | Doing correct statistics in a working environment?
What is described appears like a somewhat bad experience. Nevertheless it should not be something that causes one to immediately question their own educational background nor the statistical judgement |
14,038 | Linear regression and non-invertibility | What you really want to solve is $$X^T X\beta = X^TY.$$
This equation has a single solution if $X^TX$ is invertible (non-singular). If it's not, you have infinitely many solutions. You then need to analyze why, i.e. there will be something about $X$ which makes $X^TX$ singular. In mathematical terms, the columns of $X$ are linear... | Linear regression and non-invertibility | What you really want to solve is $$X^T X\beta = X^TY.$$
This equation has a single solution if $X^TX$ is invertible(non-singular). If it's not, you have more solutions. You then need to analyze why, | Linear regression and non-invertibility
What you really want to solve is $$X^T X\beta = X^TY.$$
This equation has a single solution if $X^TX$ is invertible(non-singular). If it's not, you have more solutions. You then need to analyze why, i.e. there will be something about $X$ which makes $X^TX$ singular. In mathemati... | Linear regression and non-invertibility
What you really want to solve is $$X^T X\beta = X^TY.$$
This equation has a single solution if $X^TX$ is invertible(non-singular). If it's not, you have more solutions. You then need to analyze why, |
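A small numpy illustration of the singular case (the design matrix and response are invented): a duplicated column makes $X^TX$ rank-deficient, yet a least-squares solution still exists, it just is not unique.

```python
import numpy as np

x = np.arange(5, dtype=float)
X = np.column_stack([np.ones(5), x, 2 * x])   # third column = 2 * second
y = np.array([1.0, 2.0, 2.5, 4.0, 5.0])

XtX = X.T @ X
print(np.linalg.matrix_rank(XtX))   # 2 out of 3: X^T X is singular

# Inverting XtX directly fails or is numerically meaningless; lstsq
# instead returns the minimum-norm minimizer of ||y - X beta||
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)                          # one of infinitely many solutions
```

Here the "something about $X$" is exact collinearity; in practice near-collinearity produces the same symptoms in a softer, numerical form.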
14,039 | Linear regression and non-invertibility | We can obtain a solution to
$$\beta=(X^TX)^{-1}X^TY$$ even if $X^TX$ is singular. However, the solution will not be unique. We can get around the problem of $X^TX$ being singular by using generalized inverses to solve the problem.
A matrix $X^g$ is a generalized inverse of the matrix $A$ if and only i... | Linear regression and non-invertibility | We can obtain a solution to
$$\beta=(X^TX)^{-1}X^TY$$ even if $X^TX$ is singular. However, the solution will not be unique. We can get around the problem of $X^TX$ being singular by u
We can obtain a solution to
$$\beta=(X^TX)^{-1}X^TY$$ even if $X^TX$ is singular. However, the solution will not be unique. We can get around the problem of $X^TX$ being singular by using generalized inverses to solve the problem.
A matrix $X^g$ is a generalized... | Linear regression and non-invertibility
We can obtain a solution to
$$\beta=(X^TX)^{-1}X^TY$$ even if $X^TX$ is singular. However, the solution will not be unique. We can get around the problem of $X^TX$ being singular by u
14,040 | Linear regression and non-invertibility | Use the Moore-Penrose inverse! It's usually the "best" generalized inverse, in that it minimizes the sum of squared residuals (which is what you want if you assume gaussian noise in $Y$) and it is unique. It is what your gradient descent should converge to, but because the loss is quadratic it can also be solved direct... | Linear regression and non-invertibility | Use the Moore-Penrose inverse! It's usually the "best" generalized inverse, in that it minimizes the sum of squared residuals (which is what you want if you assume gaussian noise in $Y$) and it is uni | Linear regression and non-invertibility
Use the Moore-Penrose inverse! It's usually the "best" generalized inverse, in that it minimizes the sum of squared residuals (which is what you want if you assume gaussian noise in $Y$) and it is unique. It is what your gradient descent should converge to, but because the loss i... | Linear regression and non-invertibility
Use the Moore-Penrose inverse! It's usually the "best" generalized inverse, in that it minimizes the sum of squared residuals (which is what you want if you assume gaussian noise in $Y$) and it is uni |
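For instance, numpy exposes the Moore-Penrose inverse as `np.linalg.pinv`. A quick sketch with toy collinear data, checking one of the defining Penrose conditions and that $\beta = X^{+}y$ satisfies the normal equations even though $X^TX$ is singular:

```python
import numpy as np

X = np.array([[1.0, 2.0, 4.0],
              [1.0, 0.0, 0.0],
              [1.0, 1.0, 2.0],
              [1.0, 3.0, 6.0]])    # third column = 2 * second -> singular X'X
y = np.array([3.0, 1.0, 2.0, 5.0])

Xp = np.linalg.pinv(X)             # Moore-Penrose pseudoinverse
beta = Xp @ y                      # minimum-norm least-squares solution

print(np.allclose(X @ Xp @ X, X))            # Penrose condition: True
print(np.allclose(X.T @ X @ beta, X.T @ y))  # normal equations hold: True
```

Among all least-squares minimizers, this `beta` has the smallest Euclidean norm, which is why it coincides with what `np.linalg.lstsq` returns.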
14,041 | Interpretting LASSO variable trace plots | In both plots, each colored line represents the value taken by a different coefficient in your model. Lambda is the weight given to the regularization term (the L1 norm), so as lambda approaches zero, the loss function of your model approaches the OLS loss function. Here's one way you could specify the LASSO loss funct... | Interpretting LASSO variable trace plots | In both plots, each colored line represents the value taken by a different coefficient in your model. Lambda is the weight given to the regularization term (the L1 norm), so as lambda approaches zero, | Interpretting LASSO variable trace plots
In both plots, each colored line represents the value taken by a different coefficient in your model. Lambda is the weight given to the regularization term (the L1 norm), so as lambda approaches zero, the loss function of your model approaches the OLS loss function. Here's one w... | Interpretting LASSO variable trace plots
In both plots, each colored line represents the value taken by a different coefficient in your model. Lambda is the weight given to the regularization term (the L1 norm), so as lambda approaches zero, |
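The shrink-to-zero behavior of each line in the trace plot can be reproduced in closed form for a single standardized predictor, where the LASSO solution is just a soft-thresholded version of the OLS coefficient. This is a self-contained sketch with simulated data, not the plots from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
x = (x - x.mean()) / x.std()      # standardize so that x.x / n == 1
y = 1.5 * x + rng.normal(size=n)
y = y - y.mean()

rho = x @ y / n                   # the OLS coefficient in this setup

def lasso_coef(lam):
    # soft-thresholding: shrink toward zero, exactly zero once lam >= |rho|
    return float(np.sign(rho) * max(abs(rho) - lam, 0.0))

for lam in [0.0, 0.5, 1.0, 2.0]:
    print(lam, lasso_coef(lam))   # the path traced by one colored line
```

At lambda = 0 the coefficient equals the OLS value, and for large enough lambda it is exactly zero, which is the hallmark of the L1 penalty visible in the plots.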
14,042 | Normalization prior to cross-validation | To answer your main question, it would be optimal and more appropriate to scale within the CV. But it will probably not matter much and might not be important in practice at all if your classifier rescales the data, which most do (at least in R).
However, selecting feature before cross validating is a BIG NO and will l... | Normalization prior to cross-validation | To answer your main question, it would be optimal and more appropiate to scale within the CV. But it will probably not matter much and might not be important in practice at all if your classifier resc | Normalization prior to cross-validation
To answer your main question, it would be optimal and more appropriate to scale within the CV. But it will probably not matter much and might not be important in practice at all if your classifier rescales the data, which most do (at least in R).
However, selecting feature before... | Normalization prior to cross-validation
To answer your main question, it would be optimal and more appropiate to scale within the CV. But it will probably not matter much and might not be important in practice at all if your classifier resc |
14,043 | Normalization prior to cross-validation | Cross-validation is best viewed as a method to estimate the performance of a statistical procedure, rather than a statistical model. Thus in order to get an unbiased performance estimate, you need to repeat every element of that procedure separately in each fold of the cross-validation, which would include normalisati... | Normalization prior to cross-validation | Cross-validation is best viewed as a method to estimate the performance of a statistical procedure, rather than a statistical model. Thus in order to get an unbiased performance estimate, you need to | Normalization prior to cross-validation
Cross-validation is best viewed as a method to estimate the performance of a statistical procedure, rather than a statistical model. Thus in order to get an unbiased performance estimate, you need to repeat every element of that procedure separately in each fold of the cross-val... | Normalization prior to cross-validation
Cross-validation is best viewed as a method to estimate the performance of a statistical procedure, rather than a statistical model. Thus in order to get an unbiased performance estimate, you need to |
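In code, "repeating the normalisation in each fold" means re-estimating the centering and scaling constants on each training fold and applying them, frozen, to the held-out fold. A bare numpy sketch with synthetic data and five folds (any model fitting would happen inside the loop):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(loc=5.0, scale=3.0, size=(100, 4))
folds = np.array_split(np.arange(100), 5)

for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(100), test_idx)
    mu = X[train_idx].mean(axis=0)       # estimated on the training fold only
    sd = X[train_idx].std(axis=0)
    X_train = (X[train_idx] - mu) / sd   # exactly standardized
    X_test = (X[test_idx] - mu) / sd     # frozen constants: no peeking
    # ... fit on X_train, evaluate on X_test ...
```

The test fold is only approximately standardized, and that is the point: at deployment time new data will also be scaled with constants estimated from the training data alone.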
14,044 | Normalization prior to cross-validation | I think that if the normalization only involves two parameters and you have a good size sample that will not be a problem. I would be more concerned about the transformation and the variable selection process. 10 fold cross-validation seems to be the rage today. Doesn't anybody use bootstrap 632 or 632+ for classifi... | Normalization prior to cross-validation | I think that if the normalization only involves two parameters and you have a good size sample that will not be a problem. I would be more concerned about the transformation and the variable selectio | Normalization prior to cross-validation
I think that if the normalization only involves two parameters and you have a good size sample that will not be a problem. I would be more concerned about the transformation and the variable selection process. 10 fold cross-validation seems to be the rage today. Doesn't anybod... | Normalization prior to cross-validation
I think that if the normalization only involves two parameters and you have a good size sample that will not be a problem. I would be more concerned about the transformation and the variable selectio |
14,045 | Normalization prior to cross-validation | I personally like the .632 method, which is basically bootstrapping with replacement. If you do that and remove duplicates you will get about 632 entries out of an input set of 1000. Kind of neat. | Normalization prior to cross-validation | I personally like the .632 method, which is basically bootstrapping with replacement. If you do that and remove duplicates you will get about 632 entries out of an input set of 1000. Kind of neat.
I personally like the .632 method. Which is basically boostrapping with replacement. If you do that and remove duplicates you will get 632 entries out of an input set of 1000. Kind of neat. | Normalization prior to cross-validation
I personally like the .632 method, which is basically bootstrapping with replacement. If you do that and remove duplicates you will get about 632 entries out of an input set of 1000. Kind of neat.
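The 632-out-of-1000 figure is the expected number of distinct observations in a bootstrap sample, since a given observation appears at least once with probability $1-(1-1/n)^n \approx 1 - 1/e \approx 0.632$. A quick check:

```python
import numpy as np

n = 1000
expected_unique = n * (1 - (1 - 1 / n) ** n)
print(round(expected_unique))    # 632

# Simulation: draw n indices with replacement and count the distinct ones
rng = np.random.default_rng(42)
uniques = [len(np.unique(rng.integers(0, n, size=n))) for _ in range(200)]
print(np.mean(uniques))          # close to 632
```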
14,046 | Applying the "kernel trick" to linear methods? | The kernel trick can only be applied to linear models where the examples in the problem formulation appear as dot products (Support Vector Machines, PCA, etc). | Applying the "kernel trick" to linear methods? | The kernel trick can only be applied to linear models where the examples in the problem formulation appear as dot products (Support Vector Machines, PCA, etc). | Applying the "kernel trick" to linear methods?
The kernel trick can only be applied to linear models where the examples in the problem formulation appear as dot products (Support Vector Machines, PCA, etc). | Applying the "kernel trick" to linear methods?
The kernel trick can only be applied to linear models where the examples in the problem formulation appear as dot products (Support Vector Machines, PCA, etc). |
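Ridge regression is a standard example of this: in its dual form the training inputs enter only through pairwise dot products, so the Gram matrix can be replaced by any kernel. A sketch with made-up data showing that with the plain linear kernel the dual predictions match ordinary ridge:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 2))
y = X @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=30)
lam = 0.5

# Primal ridge: beta = (X'X + lam I)^{-1} X'y
beta = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

# Dual (kernelized) ridge: alpha = (K + lam I)^{-1} y with K = X X'
K = X @ X.T                        # linear kernel; swap in any k(x, x') here
alpha = np.linalg.solve(K + lam * np.eye(30), y)

x_new = np.array([0.3, -0.7])
pred_primal = x_new @ beta
pred_dual = (X @ x_new) @ alpha    # prediction uses only dot products
print(np.isclose(pred_primal, pred_dual))   # True
```

Replacing `K` and the test-point dot products with a nonlinear kernel gives kernel ridge regression, with no other change to the algebra.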
14,047 | Applying the "kernel trick" to linear methods? | Two further references from B. Schölkopf:
Schölkopf, B. and Smola, A.J. (2002). Learning with kernels. The MIT Press.
Schölkopf, B., Tsuda, K., and Vert, J.-P. (2004). Kernel methods in computational biology. The MIT Press.
and a website dedicated to kernel machines. | Applying the "kernel trick" to linear methods? | Two further references from B. Schölkopf:
Schölkopf, B. and Smola, A.J. (2002). Learning with kernels. The MIT Press.
Schölkopf, B., Tsuda, K., and Vert, J.-P. (2004). Kernel methods in computational | Applying the "kernel trick" to linear methods?
Two further references from B. Schölkopf:
Schölkopf, B. and Smola, A.J. (2002). Learning with kernels. The MIT Press.
Schölkopf, B., Tsuda, K., and Vert, J.-P. (2004). Kernel methods in computational biology. The MIT Press.
and a website dedicated to kernel machines. | Applying the "kernel trick" to linear methods?
Two further references from B. Schölkopf:
Schölkopf, B. and Smola, A.J. (2002). Learning with kernels. The MIT Press.
Schölkopf, B., Tsuda, K., and Vert, J.-P. (2004). Kernel methods in computational |
14,048 | Applying the "kernel trick" to linear methods? | @ebony1 gives the key point (+1). I was a co-author of a paper discussing how to kernelize generalised linear models, e.g. logistic regression and Poisson regression; it is pretty straightforward.
G. C. Cawley, G. J. Janacek and N. L. C. Talbot, Generalised kernel machines, in Proceedings of the IEEE/INNS International... | Applying the "kernel trick" to linear methods? | @ebony1 gives the key point (+1), I was a co-author of a paper discussing how to kernelize generalised linear models, e.g. logistic regression and Poisson regression, it is pretty straightforward.
G. | Applying the "kernel trick" to linear methods?
@ebony1 gives the key point (+1), I was a co-author of a paper discussing how to kernelize generalised linear models, e.g. logistic regression and Poisson regression, it is pretty straightforward.
G. C. Cawley, G. J. Janacek and N. L. C. Talbot, Generalised kernel machines... | Applying the "kernel trick" to linear methods?
@ebony1 gives the key point (+1), I was a co-author of a paper discussing how to kernelize generalised linear models, e.g. logistic regression and Poisson regression, it is pretty straightforward.
G. |
14,049 | How is MANOVA related to LDA? | In a nutshell
Both one-way MANOVA and LDA start with decomposing the total scatter matrix $\mathbf T$ into the within-class scatter matrix $\mathbf W$ and between-class scatter matrix $\mathbf B$, such that $\mathbf T = \mathbf W + \mathbf B$. Note that this is fully analogous to how one-way ANOVA decomposes total sum-... | How is MANOVA related to LDA? | In a nutshell
Both one-way MANOVA and LDA start with decomposing the total scatter matrix $\mathbf T$ into the within-class scatter matrix $\mathbf W$ and between-class scatter matrix $\mathbf B$, suc | How is MANOVA related to LDA?
In a nutshell
Both one-way MANOVA and LDA start with decomposing the total scatter matrix $\mathbf T$ into the within-class scatter matrix $\mathbf W$ and between-class scatter matrix $\mathbf B$, such that $\mathbf T = \mathbf W + \mathbf B$. Note that this is fully analogous to how one-w... | How is MANOVA related to LDA?
In a nutshell
Both one-way MANOVA and LDA start with decomposing the total scatter matrix $\mathbf T$ into the within-class scatter matrix $\mathbf W$ and between-class scatter matrix $\mathbf B$, suc |
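The decomposition $\mathbf T = \mathbf W + \mathbf B$ is easy to verify numerically on toy grouped data (the group means and sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
groups = [rng.normal(0.0, 1.0, size=(20, 3)),
          rng.normal(1.5, 1.0, size=(25, 3)),
          rng.normal(-1.0, 1.0, size=(15, 3))]

X = np.vstack(groups)
grand = X.mean(axis=0)

T = (X - grand).T @ (X - grand)                        # total scatter
W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0))  # within-class
        for g in groups)
B = sum(len(g) * np.outer(g.mean(axis=0) - grand,      # between-class
                          g.mean(axis=0) - grand)
        for g in groups)

print(np.allclose(T, W + B))   # True
```

These are exactly the matrices that MANOVA tests with (via eigenvalues of $\mathbf W^{-1}\mathbf B$) and that LDA uses to find its discriminant axes.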
14,050 | How to show that an estimator is consistent? | EDIT: Fixed minor mistakes.
Here's one way to do it:
An estimator of $\theta$ (let's call it $T_n$) is consistent if it converges in probability to $\theta$. Using your notation
$\mathrm{plim}_{n\rightarrow\infty}T_n = \theta $.
Convergence in probability, mathematically, means
$\lim\limits_{n\rightarrow\infty} P(|... | How to show that an estimator is consistent? | EDIT: Fixed minor mistakes.
Here's one way to do it:
An estimator of $\theta$ (let's call it $T_n$) is consistent if it converges in probability to $\theta$. Using your notation
$\mathrm{plim}_{n\ | How to show that an estimator is consistent?
EDIT: Fixed minor mistakes.
Here's one way to do it:
An estimator of $\theta$ (let's call it $T_n$) is consistent if it converges in probability to $\theta$. Using your notation
$\mathrm{plim}_{n\rightarrow\infty}T_n = \theta $.
Convergence in probability, mathematically... | How to show that an estimator is consistent?
EDIT: Fixed minor mistakes.
Here's one way to do it:
An estimator of $\theta$ (let's call it $T_n$) is consistent if it converges in probability to $\theta$. Using your notation
$\mathrm{plim}_{n\ |
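The definition can also be illustrated by simulation: taking $T_n$ to be the sample mean of $N(\theta, 1)$ draws, the frequency of $|T_n - \theta| > \epsilon$ shrinks toward zero as $n$ grows. The sample sizes, $\epsilon = 0.1$, and the number of replications below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, eps, reps = 0.0, 0.1, 400

freqs = []
for n in [10, 100, 2500]:
    # reps independent samples of size n; T_n is the sample mean of each
    means = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
    freqs.append(float(np.mean(np.abs(means - theta) > eps)))
print(freqs)   # decreasing toward 0 as n grows
```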
14,051 | Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification? | I was also going to point to Deborah Mayo's work as linked in a comment. She is a Popper influenced philosopher who has written a lot about statistical testing.
I'll try to address the questions.
(1a) Popper didn't think of statistical testing as formalising his approach at all. Mayo states that this is because Popper ... | Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification? | I was also going to point to Deborah Mayo's work as linked in a comment. She is a Popper influenced philosopher who has written a lot about statistical testing.
I'll try to address the questions.
(1a) | Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification?
I was also going to point to Deborah Mayo's work as linked in a comment. She is a Popper influenced philosopher who has written a lot about statistical testing.
I'll try to address the questions.
(1a) Popper didn't thin... | Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification?
I was also going to point to Deborah Mayo's work as linked in a comment. She is a Popper influenced philosopher who has written a lot about statistical testing.
I'll try to address the questions.
(1a) |
14,052 | Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification? | Too long for comments, so here are my thoughts.
Null Hypothesis Statistical Testing (NHST) is only Popperian in the sense that no amount of corroboration proves a hypothesis correct, so often the best you can do is to find out what you can reasonably reject and continue on with hypotheses that have survived the tests t... | Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification? | Too long for comments, so here are my thoughts.
Null Hypothesis Statistical Testing (NHST) is only Popperian in the sense that no amount of corroboration proves a hypothesis correct, so often the best | Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification?
Too long for comments, so here are my thoughts.
Null Hypothesis Statistical Testing (NHST) is only Popperian in the sense that no amount of corroboration proves a hypothesis correct, so often the best you can do is to f... | Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification?
Too long for comments, so here are my thoughts.
Null Hypothesis Statistical Testing (NHST) is only Popperian in the sense that no amount of corroboration proves a hypothesis correct, so often the best |
14,053 | Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification? | Second-hand from Richard McElreath, but I think no. Popper's famous falsification theory was about falsifying experimental hypotheses not null hypotheses. | Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification? | Second-hand from Richard McElreath, but I think no. Popper's famous falsification theory was about falsifying experimental hypotheses not null hypotheses. | Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification?
Second-hand from Richard McElreath, but I think no. Popper's famous falsification theory was about falsifying experimental hypotheses not null hypotheses. | Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification?
Second-hand from Richard McElreath, but I think no. Popper's famous falsification theory was about falsifying experimental hypotheses not null hypotheses. |
14,054 | Can you give a simple intuitive explanation of IRLS method to find the MLE of a GLM? | Some years ago I wrote a paper about this for my students (in spanish), so I can try to rewrite those explanations here. I will look at IRLS (iteratively reweighted least squares) through a series of examples of increasing complexity. For the first example we need the concept of a location-scale family. Let $f_0$ be... | Can you give a simple intuitive explanation of IRLS method to find the MLE of a GLM? | Some years ago I wrote a paper about this for my students (in spanish), so I can try to rewrite those explanations here. I will look at IRLS (iteratively reweighted least squares) through a series of | Can you give a simple intuitive explanation of IRLS method to find the MLE of a GLM?
Some years ago I wrote a paper about this for my students (in spanish), so I can try to rewrite those explanations here. I will look at IRLS (iteratively reweighted least squares) through a series of examples of increasing complexity.... | Can you give a simple intuitive explanation of IRLS method to find the MLE of a GLM?
Some years ago I wrote a paper about this for my students (in spanish), so I can try to rewrite those explanations here. I will look at IRLS (iteratively reweighted least squares) through a series of |
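For the GLM case the question asks about, one IRLS iteration is a weighted least-squares solve with weights $w_i = p_i(1-p_i)$ and working response $z_i = \eta_i + (y_i - p_i)/w_i$. Here is a bare-bones sketch for logistic regression, on synthetic, deliberately non-separable data, with no convergence safeguards (this is my illustration, not code from the paper mentioned above):

```python
import numpy as np

# Toy 1-D data with two deliberately flipped labels so the MLE is finite
x = np.linspace(-3.0, 3.0, 50)
y = (x > 0).astype(float)
y[10], y[40] = 1.0, 0.0                 # label noise -> non-separable
X = np.column_stack([np.ones_like(x), x])

beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta                      # linear predictor
    p = 1.0 / (1.0 + np.exp(-eta))      # fitted probabilities
    w = p * (1.0 - p)                   # IRLS weights
    z = eta + (y - p) / w               # working response
    # One weighted least-squares solve: beta = (X'WX)^{-1} X'Wz
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))

print(beta)                             # intercept and slope of the MLE
print(np.abs(X.T @ (y - 1.0 / (1.0 + np.exp(-X @ beta)))).max())  # ~0
```

At convergence the score equations $X^T(y - p) = 0$ hold; for other GLMs only the link function and the weight/working-response formulas change.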
14,055 | what makes neural networks a nonlinear classification model? | I think you forgot the activation function in the nodes of the neural network, which is non-linear and makes the whole model non-linear.
Your formula is not totally correct:
$$
h_1 \neq w_1x_1+w_2x_2
$$
but
$$
h_1 = \text{sigmoid}(w_1x_1+w_2x_2)
$$
where the sigmoid function looks like this, $\text{sigmoid}(x)=\frac 1 {1... | what makes neural networks a nonlinear classification model? | I think you forgot the activation function in the nodes of the neural network, which is non-linear and makes the whole model non-linear.
In your formula is not totally correct, where,
$$
h_1 \neq w_1x_1+ | what makes neural networks a nonlinear classification model?
I think you forget the activation function in nodes in neural network, which is non-linear and will make the whole model non-linear.
In your formula is not totally correct, where,
$$
h_1 \neq w_1x_1+w_2x_2
$$
but
$$
h_1 = \text{sigmoid}(w_1x_1+w_2x_2)
$$
wh... | what makes neural networks a nonlinear classification model?
I think you forget the activation function in nodes in neural network, which is non-linear and will make the whole model non-linear.
In your formula is not totally correct, where,
$$
h_1 \neq w_1x_1+ |
14,056 | what makes neural networks a nonlinear classification model? | You're correct that multiple linear layers can be equivalent to a single linear layer. As the other answers have said, a nonlinear activation function allows nonlinear classification. Saying that a classifier is nonlinear means that it has a nonlinear decision boundary. The decision boundary is a surface that separates... | what makes neural networks a nonlinear classification model? | You're correct that multiple linear layers can be equivalent to a single linear layer. As the other answers have said, a nonlinear activation function allows nonlinear classification. Saying that a cl | what makes neural networks a nonlinear classification model?
You're correct that multiple linear layers can be equivalent to a single linear layer. As the other answers have said, a nonlinear activation function allows nonlinear classification. Saying that a classifier is nonlinear means that it has a nonlinear decisio... | what makes neural networks a nonlinear classification model?
You're correct that multiple linear layers can be equivalent to a single linear layer. As the other answers have said, a nonlinear activation function allows nonlinear classification. Saying that a cl |
14,057 | what makes neural networks a nonlinear classification model? | The nonlinearity comes from the sigmoid activation function, 1/(1+e^x), where x is the linear combination of predictors and weights that you referenced in your question.
By the way, the bounds of this activation are zero and one because either the denominator gets so large that the fraction approaches zero, or e^x beco... | what makes neural networks a nonlinear classification model? | The nonlinearity comes from the sigmoid activation function, 1/(1+e^x), where x is the linear combination of predictors and weights that you referenced in your question.
By the way, the bounds of this | what makes neural networks a nonlinear classification model?
The nonlinearity comes from the sigmoid activation function, 1/(1+e^x), where x is the linear combination of predictors and weights that you referenced in your question.
By the way, the bounds of this activation are zero and one because either the denominator... | what makes neural networks a nonlinear classification model?
The nonlinearity comes from the sigmoid activation function, 1/(1+e^x), where x is the linear combination of predictors and weights that you referenced in your question.
By the way, the bounds of this |
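A minimal numpy sketch of the point made in these answers, with made-up weights and shapes: two stacked linear layers still satisfy f(x+y) = f(x) + f(y), while the same layers with a sigmoid in between do not.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # first layer weights (made up)
W2 = rng.normal(size=(1, 3))   # second layer weights (made up)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def linear_net(x):
    # two stacked linear layers collapse into the single linear map W2 @ W1
    return W2 @ (W1 @ x)

def nonlinear_net(x):
    # the same weights with a sigmoid in between: no longer a linear map
    return W2 @ sigmoid(W1 @ x)

x = rng.normal(size=2)
y = rng.normal(size=2)

# A linear map f satisfies f(x + y) = f(x) + f(y); check both networks.
lin_gap = np.abs(linear_net(x + y) - (linear_net(x) + linear_net(y))).max()
nonlin_gap = np.abs(nonlinear_net(x + y) - (nonlinear_net(x) + nonlinear_net(y))).max()
print(lin_gap, nonlin_gap)  # the first is float noise, the second clearly is not
```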
14,058 | Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [duplicate] | The covariance matrix is symmetric. If a matrix $A$ is symmetric, and has two eigenvectors $u$ and $v$, consider $Au = \lambda u$ and $Av = \mu v$.
Then by symmetry (and writing $'$ for transpose):
$$u'Av = u'A'v = (Au)'v = \lambda u'v$$
More directly:
$$u'Av = u'(\mu v) = \mu u'v$$
Since these are equal we obtain $(\l... | Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [du | The covariance matrix is symmetric. If a matrix $A$ is symmetric, and has two eigenvectors $u$ and $v$, consider $Au = \lambda u$ and $Av = \mu v$.
Then by symmetry (and writing $'$ for transpose):
$$ | Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [duplicate]
The covariance matrix is symmetric. If a matrix $A$ is symmetric, and has two eigenvectors $u$ and $v$, consider $Au = \lambda u$ and $Av = \mu v$.
Then by symmetry (and writing $'$ for transpose):
$$u'Av = u'A'... | Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [du
The covariance matrix is symmetric. If a matrix $A$ is symmetric, and has two eigenvectors $u$ and $v$, consider $Au = \lambda u$ and $Av = \mu v$.
Then by symmetry (and writing $'$ for transpose):
$$ |
14,059 | Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [duplicate] | I think it might help to pull back from the mathematics and think about the goal of PCA. In my mind, PCA is used to represent large-dimensional data sets (many variables) in the clearest way possible--i.e. the way that reveals as much of the underlying data structure as possible.
For an example, let's consider a data ... | Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [du | I think it might help to pull back from the mathematics and think about the goal of PCA. In my mind, PCA is used to represent large-dimensional data sets (many variables) in the clearest way possible- | Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [duplicate]
I think it might help to pull back from the mathematics and think about the goal of PCA. In my mind, PCA is used to represent large-dimensional data sets (many variables) in the clearest way possible--i.e. the w... | Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [du
I think it might help to pull back from the mathematics and think about the goal of PCA. In my mind, PCA is used to represent large-dimensional data sets (many variables) in the clearest way possible- |
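A quick numerical check of the derivation above (numpy, with made-up correlated data): the eigenvectors of a symmetric covariance matrix come out mutually orthogonal.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # correlated sample
S = np.cov(X, rowvar=False)     # covariance matrix: symmetric by construction

# eigh is the eigensolver for symmetric matrices; columns of V are eigenvectors
eigvals, V = np.linalg.eigh(S)

# The argument above implies V'V = I: eigenvectors of a symmetric matrix for
# distinct eigenvalues are orthogonal (eigh also normalises them to unit length)
ortho_error = np.abs(V.T @ V - np.eye(4)).max()
print(ortho_error)  # numerically zero
```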
14,060 | Is it wrong to refer to results as being "highly significant"? | I think there is not much wrong in saying that the results are "highly significant" (even though yes, it is a bit sloppy).
It means that if you had set a much smaller significance level $\alpha$, you would still have judged the results as significant. Or, equivalently, if some of your readers have a much smaller $\alph... | Is it wrong to refer to results as being "highly significant"? | I think there is not much wrong in saying that the results are "highly significant" (even though yes, it is a bit sloppy).
It means that if you had set a much smaller significance level $\alpha$, you | Is it wrong to refer to results as being "highly significant"?
I think there is not much wrong in saying that the results are "highly significant" (even though yes, it is a bit sloppy).
It means that if you had set a much smaller significance level $\alpha$, you would still have judged the results as significant. Or, e... | Is it wrong to refer to results as being "highly significant"?
I think there is not much wrong in saying that the results are "highly significant" (even though yes, it is a bit sloppy).
It means that if you had set a much smaller significance level $\alpha$, you |
14,061 | Is it wrong to refer to results as being "highly significant"? | This is a common question.
A similar question may be "Why is p<=0.05 considered significant?" (http://www.jerrydallal.com/LHSP/p05.htm)
@Michael-Mayer gave one part of the answer: significance is only one part of the answer. With enough data, usually some parameters will show up as "significant" (look up Bonferroni co... | Is it wrong to refer to results as being "highly significant"? | This is a common question.
A similar question may be "Why is p<=0.05 considered significant?" (http://www.jerrydallal.com/LHSP/p05.htm)
@Michael-Mayer gave one part of the answer: significance is only | Is it wrong to refer to results as being "highly significant"?
This is a common question.
A similar question may be "Why is p<=0.05 considered significant?" (http://www.jerrydallal.com/LHSP/p05.htm)
@Michael-Mayer gave one part of the answer: significance is only one part of the answer. With enough data, usually some ... | Is it wrong to refer to results as being "highly significant"?
This is a common question.
A similar question may be "Why is p<=0.05 considered significant?" (http://www.jerrydallal.com/LHSP/p05.htm)
@Michael-Mayer gave one part of the answer: significance is only |
14,062 | Is it wrong to refer to results as being "highly significant"? | A test is a tool for a black-white decision, i.e. it tries to answer a yes/no question like 'is there a true treatment effect?'. Often, especially if the data set is large, such a question is quite a waste of resources. Why ask a binary question if it is possible to get an answer to a quantitative question like 'how l... | Is it wrong to refer to results as being "highly significant"? | A test is a tool for a black-white decision, i.e. it tries to answer a yes/no question like 'is there a true treatment effect?'. Often, especially if the data set is large, such a question is quite a wa
A test is a tool for a black-white decision, i.e. it tries to answer a yes/no question like 'is there a true treatment effect?'. Often, especially if the data set is large, such question is quite a waste of resources. Why asking a binary question if it is p... | Is it wrong to refer to results as being "highly significant"?
A test is a tool for a black-white decision, i.e. it tries to answer a yes/no question like 'is there a true treatment effect?'. Often, especially if the data set is large, such question is quite a wa |
14,063 | What's wrong with this illustration of posterior distribution? | It looks like the prior and likelihood are normal, in which case the posterior should actually be narrower than either the likelihood or the prior. Notice that if
$$X \mid \mu \sim N(\mu, \sigma^2/n)$$ and
$$\mu \sim N(\mu_0, \tau^2),$$
then the posterior variance of $\mu \mid X$ is $$\dfrac{1}{n/\sigma^2 + 1/\tau^2}... | What's wrong with this illustration of posterior distribution? | It looks like the prior and likelihood are normal, in which case the posterior should actually be narrower than either the likelihood or the prior. Notice that if
$$X \mid \mu \sim N(\mu, \sigma^2/n | What's wrong with this illustration of posterior distribution?
It looks like the prior and likelihood are normal, in which case the posterior should actually be narrower than either the likelihood or the prior. Notice that if
$$X \mid \mu \sim N(\mu, \sigma^2/n)$$ and
$$\mu \sim N(\mu_0, \tau^2),$$
then the posterior... | What's wrong with this illustration of posterior distribution?
It looks like the prior and likelihood are normal, in which case the posterior should actually be narrower than either the likelihood or the prior. Notice that if
$$X \mid \mu \sim N(\mu, \sigma^2/n |
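A small numerical sketch of the point above, with made-up values for $\sigma^2$, $n$ and $\tau^2$: precisions (inverse variances) add, so the posterior variance is smaller than both the likelihood's and the prior's.

```python
# Made-up numbers: data variance sigma^2, sample size n, prior variance tau^2
sigma2, n, tau2 = 4.0, 10, 2.0

lik_var = sigma2 / n                        # spread of the likelihood for mu
post_var = 1.0 / (n / sigma2 + 1.0 / tau2)  # posterior variance from the formula

# Precisions add: 1/post_var = n/sigma^2 + 1/tau^2
assert abs(1.0 / post_var - (1.0 / lik_var + 1.0 / tau2)) < 1e-12
print(lik_var, tau2, post_var)  # the posterior variance is the smallest of the three
```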
14,064 | Why is the probability zero for any given value of a normal distribution? | Perhaps the following thought-experiment helps you to understand better why the probability $Pr(X=a)$ is zero in a continuous distribution: Imagine that you have a wheel of fortune. Normally, the wheel is partitioned in several discrete sectors, perhaps 20 or so. If all sectors have the same area, you would have a prob... | Why is the probability zero for any given value of a normal distribution? | Perhaps the following thought-experiment helps you to understand better why the probability $Pr(X=a)$ is zero in a continuous distribution: Imagine that you have a wheel of fortune. Normally, the whee | Why is the probability zero for any given value of a normal distribution?
Perhaps the following thought-experiment helps you to understand better why the probability $Pr(X=a)$ is zero in a continuous distribution: Imagine that you have a wheel of fortune. Normally, the wheel is partitioned in several discrete sectors, ... | Why is the probability zero for any given value of a normal distribution?
Perhaps the following thought-experiment helps you to understand better why the probability $Pr(X=a)$ is zero in a continuous distribution: Imagine that you have a wheel of fortune. Normally, the whee |
14,065 | Why is the probability zero for any given value of a normal distribution? | "Probabilities of continuous random variables (X) are defined as the area under the curve of its PDF. Thus, only ranges of values can have a nonzero probability. The probability that a continuous random variable equals some value is always zero."
reference page:
http://support.minitab.com/en-us/minitab-express/1/help-... | Why is the probability zero for any given value of a normal distribution? | "Probabilities of continuous random variables (X) are defined as the area under the curve of its PDF. Thus, only ranges of values can have a nonzero probability. The probability that a continuous rand | Why is the probability zero for any given value of a normal distribution?
"Probabilities of continuous random variables (X) are defined as the area under the curve of its PDF. Thus, only ranges of values can have a nonzero probability. The probability that a continuous random variable equals some value is always zero."... | Why is the probability zero for any given value of a normal distribution?
"Probabilities of continuous random variables (X) are defined as the area under the curve of its PDF. Thus, only ranges of values can have a nonzero probability. The probability that a continuous rand |
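A numerical illustration of both answers using only the standard library: the probability of an interval around a point is an area under the pdf, and it shrinks to zero as the interval collapses onto the point.

```python
from statistics import NormalDist  # standard library, Python >= 3.8

Z = NormalDist(mu=0.0, sigma=1.0)
a = 1.0

# P(a - eps < X < a + eps) is an area under the density; it vanishes with eps
for eps in (0.1, 0.01, 0.001, 1e-6):
    p = Z.cdf(a + eps) - Z.cdf(a - eps)
    print(eps, p)
```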
14,066 | Framing the negative binomial distribution for DNA sequencing | IMHO, I really think that the negative binomial distribution is used for convenience.
So in RNA Seq there is a common assumption that if you take an infinite number of measurements of the same gene in an infinite number of replicates then the true distribution would be lognormal. This distribution is then sampled via ... | Framing the negative binomial distribution for DNA sequencing | IMOH, I really think that the negative binomial distribution is used for convenience.
So in RNA Seq there is a common assumption that if you take an infinite number of measurements of the same gene in | Framing the negative binomial distribution for DNA sequencing
IMHO, I really think that the negative binomial distribution is used for convenience.
So in RNA Seq there is a common assumption that if you take an infinite number of measurements of the same gene in an infinite number of replicates then the true distributi... | Framing the negative binomial distribution for DNA sequencing
IMHO, I really think that the negative binomial distribution is used for convenience.
So in RNA Seq there is a common assumption that if you take an infinite number of measurements of the same gene in |
14,067 | Framing the negative binomial distribution for DNA sequencing | I looked through a few web pages and couldn't find an explanation, but I came up with one for integer values of $r$. Suppose we have two radioactive sources independently generating alpha and beta particles at the rates $\alpha$ and $\beta$, respectively.
What is the distribution of the number of alpha particles befor... | Framing the negative binomial distribution for DNA sequencing | I looked through a few web pages and couldn't find an explanation, but I came up with one for integer values of $r$. Suppose we have two radioactive sources independently generating alpha and beta par | Framing the negative binomial distribution for DNA sequencing
I looked through a few web pages and couldn't find an explanation, but I came up with one for integer values of $r$. Suppose we have two radioactive sources independently generating alpha and beta particles at the rates $\alpha$ and $\beta$, respectively.
W... | Framing the negative binomial distribution for DNA sequencing
I looked through a few web pages and couldn't find an explanation, but I came up with one for integer values of $r$. Suppose we have two radioactive sources independently generating alpha and beta par |
14,068 | Framing the negative binomial distribution for DNA sequencing | Some explain it as something that works like the Poisson distribution but has an additional parameter, allowing more freedom to model the true distribution, with a variance not necessarily equal to the mean
Some explain it as a weighted mixture of Poisson distributions (with a gamma mixing distribution on the Poisson p... | Framing the negative binomial distribution for DNA sequencing | Some explain it as something that works like the Poisson distribution but has an additional parameter, allowing more freedom to model the true distribution, with a variance not necessarily equal to th | Framing the negative binomial distribution for DNA sequencing
Some explain it as something that works like the Poisson distribution but has an additional parameter, allowing more freedom to model the true distribution, with a variance not necessarily equal to the mean
Some explain it as a weighted mixture of Poisson di... | Framing the negative binomial distribution for DNA sequencing
Some explain it as something that works like the Poisson distribution but has an additional parameter, allowing more freedom to model the true distribution, with a variance not necessarily equal to th |
14,069 | Framing the negative binomial distribution for DNA sequencing | I can only offer intuition, but the gamma distribution itself describes (continuous) waiting times (how long does it take for a rare event to occur). So the fact that a gamma-distributed mixture of discrete poisson distributions would result in a discrete waiting time (trials until N failures) does not seem too surpris... | Framing the negative binomial distribution for DNA sequencing | I can only offer intuition, but the gamma distribution itself describes (continuous) waiting times (how long does it take for a rare event to occur). So the fact that a gamma-distributed mixture of di | Framing the negative binomial distribution for DNA sequencing
I can only offer intuition, but the gamma distribution itself describes (continuous) waiting times (how long does it take for a rare event to occur). So the fact that a gamma-distributed mixture of discrete poisson distributions would result in a discrete wa... | Framing the negative binomial distribution for DNA sequencing
I can only offer intuition, but the gamma distribution itself describes (continuous) waiting times (how long does it take for a rare event to occur). So the fact that a gamma-distributed mixture of di |
14,070 | Framing the negative binomial distribution for DNA sequencing | I'll try to give a simplistic mechanistic interpretation that I found useful when thinking about this.
Assume we have a perfect uniform coverage of the genome before library prep, and we observe $\mu$ reads covering a site on average.
Say that sequencing is a process that picks an original DNA fragment, puts it through... | Framing the negative binomial distribution for DNA sequencing | I'll try to give a simplistic mechanistic interpretation that I found useful when thinking about this.
Assume we have a perfect uniform coverage of the genome before library prep, and we observe $\mu$ | Framing the negative binomial distribution for DNA sequencing
I'll try to give a simplistic mechanistic interpretation that I found useful when thinking about this.
Assume we have a perfect uniform coverage of the genome before library prep, and we observe $\mu$ reads covering a site on average.
Say that sequencing is ... | Framing the negative binomial distribution for DNA sequencing
I'll try to give a simplistic mechanistic interpretation that I found useful when thinking about this.
Assume we have a perfect uniform coverage of the genome before library prep, and we observe $\mu$ |
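A simulation sketch of the gamma-Poisson mixture view mentioned in the answers above (numpy, made-up parameters): drawing a gamma-distributed rate per observation and then a Poisson count reproduces the negative binomial's mean and its overdispersed variance.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, size = 10.0, 5.0     # NB mean and "size" (inverse-dispersion) parameter
n_draws = 200_000

# Gamma-Poisson mixture: with shape=size and scale=mu/size, the resulting
# counts are negative binomial with mean mu and variance mu + mu^2/size
rates = rng.gamma(shape=size, scale=mu / size, size=n_draws)
counts = rng.poisson(rates)

print(counts.mean())   # close to mu
print(counts.var())    # close to mu + mu**2 / size, i.e. larger than the mean
```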
14,071 | How to do estimation, when only summary statistics are available? | In this case, you can consider an ABC approximation of the likelihood (and consequently of the MLE) under the following assumption/restriction:
Assumption. The original sample size $n$ is known.
This is not a wild assumption given that the quality, in terms of convergence, of frequentist estimators depends on the sampl... | How to do estimation, when only summary statistics are available? | In this case, you can consider an ABC approximation of the likelihood (and consequently of the MLE) under the following assumption/restriction:
Assumption. The original sample size $n$ is known.
This | How to do estimation, when only summary statistics are available?
In this case, you can consider an ABC approximation of the likelihood (and consequently of the MLE) under the following assumption/restriction:
Assumption. The original sample size $n$ is known.
This is not a wild assumption given that the quality, in te... | How to do estimation, when only summary statistics are available?
In this case, you can consider an ABC approximation of the likelihood (and consequently of the MLE) under the following assumption/restriction:
Assumption. The original sample size $n$ is known.
This |
14,072 | How to do estimation, when only summary statistics are available? | It all depends on whether or not the joint distribution of those $T_i$'s is known. If it is, e.g.,
$$
(T_1,\ldots,T_k)\sim g(t_1,\ldots,t_k|\theta,n)
$$
then you can conduct maximum likelihood estimation based on this joint distribution. Note that, unless $(T_1,\ldots,T_k)$ is sufficient, this will almost always be a d... | How to do estimation, when only summary statistics are available? | It all depends on whether or not the joint distribution of those $T_i$'s is known. If it is, e.g.,
$$
(T_1,\ldots,T_k)\sim g(t_1,\ldots,t_k|\theta,n)
$$
then you can conduct maximum likelihood estimat | How to do estimation, when only summary statistics are available?
It all depends on whether or not the joint distribution of those $T_i$'s is known. If it is, e.g.,
$$
(T_1,\ldots,T_k)\sim g(t_1,\ldots,t_k|\theta,n)
$$
then you can conduct maximum likelihood estimation based on this joint distribution. Note that, unles... | How to do estimation, when only summary statistics are available?
It all depends on whether or not the joint distribution of those $T_i$'s is known. If it is, e.g.,
$$
(T_1,\ldots,T_k)\sim g(t_1,\ldots,t_k|\theta,n)
$$
then you can conduct maximum likelihood estimat |
14,073 | How to do estimation, when only summary statistics are available? | The (frequentist) maximum likelihood estimator is as follows:
For $F$ in the exponential family, and if your statistics are sufficient, your likelihood to be maximised can always be written in the form:
$$
l(\theta| T) = \exp\left( -\psi(\theta) + \langle T,\phi(\theta) \rangle \right),
$$
where $\langle \cdot, \cdot\ra... | How to do estimation, when only summary statistics are available? | The (frequentist) maximum likelihood estimator is as follows:
For $F$ in the exponential family, and if your statistics are sufficient your likelihood to be maximised can always be written in the form | How to do estimation, when only summary statistics are available?
The (frequentist) maximum likelihood estimator is as follows:
For $F$ in the exponential family, and if your statistics are sufficient your likelihood to be maximised can always be written in the form:
$$
l(\theta| T) = \exp\left( -\psi(\theta) + \langle... | How to do estimation, when only summary statistics are available?
The (frequentist) maximum likelihood estimator is as follows:
For $F$ in the exponential family, and if your statistics are sufficient your likelihood to be maximised can always be written in the form |
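A toy ABC rejection sketch along the lines of the first answer (numpy; the summary values, prior range and tolerance are all made up): with the sample size known, simulate parameter draws and keep those whose simulated summary lands close to the observed one.

```python
import numpy as np

rng = np.random.default_rng(3)

# Only summaries are available (made-up values): sample size, mean, sd
n, obs_mean, obs_sd = 50, 2.1, 0.9

# ABC rejection for the mean of a normal model with known sd: draw theta from
# a flat prior, simulate the sample mean of n points directly (its sd is
# obs_sd / sqrt(n)), and accept theta when it matches the observed mean
n_sims = 20_000
thetas = rng.uniform(-5, 5, size=n_sims)
sim_means = rng.normal(thetas, obs_sd / np.sqrt(n))
accepted = thetas[np.abs(sim_means - obs_mean) < 0.05]

print(len(accepted), accepted.mean())  # approximate posterior centred near 2.1
```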
14,074 | What is the definition of a symmetric distribution? | Briefly: $X$ is symmetric when $X$ and $2a-X$ have the same distribution for some real number $a$. But arriving at this in a fully justified manner requires some digression and generalizations, because it raises many implicit questions: why this definition of "symmetric"? Can there be other kinds of symmetries? What... | What is the definition of a symmetric distribution? | Briefly: $X$ is symmetric when $X$ and $2a-X$ have the same distribution for some real number $a$. But arriving at this in a fully justified manner requires some digression and generalizations, becau | What is the definition of a symmetric distribution?
Briefly: $X$ is symmetric when $X$ and $2a-X$ have the same distribution for some real number $a$. But arriving at this in a fully justified manner requires some digression and generalizations, because it raises many implicit questions: why this definition of "symmet... | What is the definition of a symmetric distribution?
Briefly: $X$ is symmetric when $X$ and $2a-X$ have the same distribution for some real number $a$. But arriving at this in a fully justified manner requires some digression and generalizations, becau |
14,075 | What is the definition of a symmetric distribution? | The answer will depend on what you mean by symmetry. In physics the notion of symmetry is fundamental and has become very general. Symmetry is any operation that leaves the system unchanged. In the case of a probability distribution this could be translated to any operation $X \to X'$ that returns the same probability ... | What is the definition of a symmetric distribution? | The answer will depend on what you mean by symmetry. In physics the notion of symmetry is fundamental and has become very general. Symmetry is any operation that leaves the system unchanged. In the ca | What is the definition of a symmetric distribution?
The answer will depend on what you mean by symmetry. In physics the notion of symmetry is fundamental and has become very general. Symmetry is any operation that leaves the system unchanged. In the case of a probability distribution this could be translated to any ope... | What is the definition of a symmetric distribution?
The answer will depend on what you mean by symmetry. In physics the notion of symmetry is fundamental and has become very general. Symmetry is any operation that leaves the system unchanged. In the ca |
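A quick standard-library check of the definition above for a normal density: symmetry about $a$ (i.e. $X$ and $2a - X$ sharing a distribution) means the density satisfies $f(a+t) = f(a-t)$.

```python
from statistics import NormalDist

# Check f(a + t) = f(a - t) numerically for N(a, 1.3) with a = 2 (made-up values)
a = 2.0
f = NormalDist(mu=a, sigma=1.3).pdf
sym_gap = max(abs(f(a + t) - f(a - t)) for t in (0.1, 0.7, 2.5))
print(sym_gap)  # numerically zero
```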
14,076 | Real meaning of confidence ellipse | Actually, neither explanation is correct.
A confidence ellipse has to do with unobserved population parameters, like the true population mean of your bivariate distribution. A 95% confidence ellipse for this mean is really an algorithm with the following property: if you were to replicate your sampling from the underly... | Real meaning of confidence ellipse | Actually, neither explanation is correct.
A confidence ellipse has to do with unobserved population parameters, like the true population mean of your bivariate distribution. A 95% confidence ellipse f | Real meaning of confidence ellipse
Actually, neither explanation is correct.
A confidence ellipse has to do with unobserved population parameters, like the true population mean of your bivariate distribution. A 95% confidence ellipse for this mean is really an algorithm with the following property: if you were to repli... | Real meaning of confidence ellipse
Actually, neither explanation is correct.
A confidence ellipse has to do with unobserved population parameters, like the true population mean of your bivariate distribution. A 95% confidence ellipse f |
14,077 | Real meaning of confidence ellipse | It depends on the area this concept applies to. What was said above is true for statistics but when we apply stats to other subjects things are a bit different. In biomechanics, for example, we use the term confidence ellipse (though there is a debate whether it should be prediction ellipse) as a technique for measurin... | Real meaning of confidence ellipse | It depends on the area this concept applies to. What was said above is true for statistics but when we apply stats to other subjects things are a bit different. In biomechanics, for example, we use th | Real meaning of confidence ellipse
It depends on the area this concept applies to. What was said above is true for statistics but when we apply stats to other subjects things are a bit different. In biomechanics, for example, we use the term confidence ellipse (though there is a debate whether it should be prediction e... | Real meaning of confidence ellipse
It depends on the area this concept applies to. What was said above is true for statistics but when we apply stats to other subjects things are a bit different. In biomechanics, for example, we use th |
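A simulation sketch of the replication interpretation (numpy; made-up mean and covariance, and a known-covariance ellipse to keep the algebra simple): across many repeated samples, the 95% ellipse around the sample mean captures the true mean in about 95% of the replications.

```python
import numpy as np

rng = np.random.default_rng(4)
mu = np.array([1.0, -2.0])                    # made-up true mean
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])    # made-up (known) covariance
n, reps = 40, 20_000
crit = -2.0 * np.log(0.05)    # chi-square(2 df) 95% quantile, about 5.99

# Sample means of n bivariate-normal points, drawn directly from N(mu, Sigma/n)
means = rng.multivariate_normal(mu, Sigma / n, size=reps)
d = means - mu
q = n * np.einsum("ij,jk,ik->i", d, np.linalg.inv(Sigma), d)

# The ellipse {m : n (m - mu)' Sigma^{-1} (m - mu) <= crit} around the sample
# mean contains the true mean exactly when the quadratic form is below crit
coverage = np.mean(q <= crit)
print(coverage)  # close to 0.95: coverage is about replications, not data points
```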
14,078 | Linear regression: any non-normal distribution giving identity of OLS and MLE? | In maximum likelihood estimation, we calculate
$$\hat \beta_{ML}: \sum \frac {\partial \ln f(\epsilon_i)}{\partial \beta} = \mathbf 0 \implies \sum \frac {f'(\epsilon_i)}{f(\epsilon_i)}\mathbf x_i = \mathbf 0$$
the last relation taking into account the linearity structure of the regression equation.
In comparison , th... | Linear regression: any non-normal distribution giving identity of OLS and MLE? | In maximum likelihood estimation, we calculate
$$\hat \beta_{ML}: \sum \frac {\partial \ln f(\epsilon_i)}{\partial \beta} = \mathbf 0 \implies \sum \frac {f'(\epsilon_i)}{f(\epsilon_i)}\mathbf x_i = | Linear regression: any non-normal distribution giving identity of OLS and MLE?
In maximum likelihood estimation, we calculate
$$\hat \beta_{ML}: \sum \frac {\partial \ln f(\epsilon_i)}{\partial \beta} = \mathbf 0 \implies \sum \frac {f'(\epsilon_i)}{f(\epsilon_i)}\mathbf x_i = \mathbf 0$$
the last relation taking into... | Linear regression: any non-normal distribution giving identity of OLS and MLE?
In maximum likelihood estimation, we calculate
$$\hat \beta_{ML}: \sum \frac {\partial \ln f(\epsilon_i)}{\partial \beta} = \mathbf 0 \implies \sum \frac {f'(\epsilon_i)}{f(\epsilon_i)}\mathbf x_i = |
14,079 | Linear regression: any non-normal distribution giving identity of OLS and MLE? | If we define the OLS as the solution to
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n (y_i-\beta_0-\beta_1x_i)^2$$
any density $f(y|x,\beta_0,\beta_1)$ such that
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n \log\{f(y_i|x_i,\beta_0,\beta_1)\}=\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n (y_i-\beta_0-\beta_1x_i)^2$$is acceptable. This ... | Linear regression: any non-normal distribution giving identity of OLS and MLE? | If we define the OLS as the solution to
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n (y_i-\beta_0-\beta_1x_i)^2$$
any density $f(y|x,\beta_0,\beta_1)$ such that
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n \l | Linear regression: any non-normal distribution giving identity of OLS and MLE?
If we define the OLS as the solution to
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n (y_i-\beta_0-\beta_1x_i)^2$$
any density $f(y|x,\beta_0,\beta_1)$ such that
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n \log\{f(y_i|x_i,\beta_0,\beta_1)\}=\arg_{\b... | Linear regression: any non-normal distribution giving identity of OLS and MLE?
If we define the OLS as the solution to
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n (y_i-\beta_0-\beta_1x_i)^2$$
any density $f(y|x,\beta_0,\beta_1)$ such that
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n \l |
14,080 | Linear regression: any non-normal distribution giving identity of OLS and MLE? | I didn't know about this question until @Xi'an just updated with an answer. There is a more generic solution. Exponential family distributions with some parameters fixed give rise to Bregman divergences. For such distributions, the mean is the minimizer. The OLS minimizer is also the mean. Therefore for all such distributions they ... | Linear regression: any non-normal distribution giving identity of OLS and MLE? | I didn't know about this question until @Xi'an just updated with an answer. There is a more generic solution. Exponential family distributions with some parameters fixed give rise to Bregman divergences.
I didn't know about this question until @Xi'an just updated with an answer. There is a more generic solution. Exponential family distributions with some parameters fixed yield to Bregman divergences. For such distributions mean is the minimi... | Linear regression: any non-normal distribution giving identity of OLS and MLE?
I didn't know about this question until @Xi'an just updated with an answer. There is a more generic solution. Exponential family distributions with some parameters fixed yield to Bregman divergences. |
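A numerical sketch of the equivalence discussed in these answers (numpy, simulated data): with Gaussian errors, the negative log-likelihood is a constant plus the OLS sum of squares, so the likelihood peaks exactly at the OLS coefficients.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, size=200)
y = 1.5 + 0.8 * x + rng.normal(0.0, 1.0, size=200)   # made-up true model

# Closed-form OLS fit
X = np.column_stack([np.ones_like(x), x])
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

def neg_loglik(b0, b1, sigma=1.0):
    # Gaussian errors: -log L = const + sum of squared residuals / (2 sigma^2),
    # so maximising the likelihood is minimising the OLS criterion
    r = y - b0 - b1 * x
    return len(y) * np.log(sigma * np.sqrt(2.0 * np.pi)) + (r**2).sum() / (2.0 * sigma**2)

# Perturb the OLS coefficients on a grid: the likelihood peak sits at offset (0, 0)
offsets = np.linspace(-0.02, 0.02, 41)
best = min((neg_loglik(b_ols[0] + d0, b_ols[1] + d1), d0, d1)
           for d0 in offsets for d1 in offsets)
print(best[1], best[2])  # both best offsets are (numerically) zero
```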
14,081 | On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of milk-first cups? | Some would argue that even if the second margin is not fixed by design, it carries little information about the lady's ability to discriminate (i.e. it's approximately ancillary) & should be conditioned on. The exact unconditional test (first proposed by Barnard) is more complicated because you have to calculate the ma...
14,082 | On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of milk-first cups? | Today, I read the first chapters of "The Design of Experiments" by RA Fisher, and one of the paragraphs made me realize the fundamental flaw in my question. That is, even if the lady can really tell the difference between milk-first and tea-first cups, I can never prove she has that ability "by any finite amount of expe...
14,083 | On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of milk-first cups? | Barnard's test is used when the nuisance parameter is unknown under the null hypothesis. However, in the lady tasting test you could argue that the nuisance parameter can be set at 0.5 under the null hypothesis (the uninformed lady has a 50% probability of correctly guessing a cup). Then the number of correct guesses, unde...
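If the nuisance parameter is pinned at 0.5 as suggested, the number of correct guesses is Binomial(n, 1/2) under the null and a plain binomial test applies. A sketch with SciPy, assuming eight cups and a perfect score (both numbers are illustrative):

```python
from scipy.stats import binomtest

# 8 cups, each guessed correctly with probability 0.5 under the null
res = binomtest(k=8, n=8, p=0.5, alternative="greater")
print(res.pvalue)  # 0.5**8 = 0.00390625
```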
14,084 | Where and why does deep learning shine? | The main purported benefits:
(1) Don't need to hand engineer features for non-linear learning problems (save time and scalable to the future, since hand engineering is seen by some as a short-term band-aid)
(2) The learnt features are sometimes better than the best hand-engineered features, and can be so complex (comp...
14,085 | Where and why does deep learning shine? | Another important point in addition to the above (I don't have sufficient rep to merely add it as a comment) is that it is a generative model (Deep Belief Nets at least) and thus you can sample from the learned distributions - this can have some major benefits in certain applications where you want to generate syntheti...
14,086 | How to simulate from a Gaussian copula? | There is a very simple method to simulate from the Gaussian copula which is based on the definitions of the multivariate normal distribution and the Gauss copula.
I'll start by providing the required definition and properties of the multivariate normal distribution, followed by the Gaussian copula, and then I'll provid...
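The full recipe is truncated above, but the standard two-step construction it refers to — sample a multivariate normal with the target correlation matrix, then apply the standard normal CDF to each margin — can be sketched as follows (the 0.7 correlation is an illustrative assumption):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])  # assumed copula correlation matrix

# Step 1: draw from the multivariate normal N(0, corr)
z = rng.multivariate_normal(mean=np.zeros(2), cov=corr, size=10_000)

# Step 2: the probability integral transform maps each margin to Uniform(0, 1)
u = norm.cdf(z)

# u is a sample from the bivariate Gaussian copula; apply the inverse CDFs of
# any desired marginals to u to obtain dependent draws with those marginals
```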
14,087 | MLE convergence errors with statespace SARIMAX | First, mle_retvals should be an attribute of SARIMAXResults if it is constructed using a fit call, so you should be able to check it. What do you get when you try print(res.mle_retvals)?
Second, do the estimated parameters seem "in the ballpark", or are they nonsense? Or are they NaN?
Without knowing more: you might tr...
14,088 | How few training examples is too few when training a neural network? | It really depends on your dataset, and network architecture. One rule of thumb I have read (2) was a few thousand samples per class for the neural network to start to perform very well.
In practice, people try and see. It's not rare to find studies showing decent results with a training set smaller than 1000 samples.
...
14,089 | Difference in Means vs. Mean Difference | (I'm assuming you mean "sample" and not "population" in your first paragraph.)
The equivalence is easy to show mathematically. Start with two samples of equal size, $\{x_1,\dots,x_n\}$ and $\{y_1,\dots,y_n\}$. Then define $$\begin{align}
\bar x &= \frac{1}{n} \sum_{i=1}^n x_i \\
\bar y &= \frac{1}{n} \sum_{i=1}^n y_i \\
\bar d &= \frac{1}{n} \sum_{i=1}^n (x_i - y_i)
\end{align}$$ so that, by linearity of the sum, $\bar d = \bar x - \bar y$: the mean difference equals the difference in means.
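The identity is easy to confirm numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=50), rng.normal(size=50)

# mean of the pairwise differences == difference of the means
assert np.isclose(np.mean(x - y), x.mean() - y.mean())
```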
14,090 | Difference in Means vs. Mean Difference | The distribution of the mean difference should be tighter than the distribution of the difference of means. See this with an easy example:
mean in sample 1: 1 10 100 1000
mean in sample 2: 2 11 102 1000
The differences are 1 1 2 0, which (unlike the samples themselves) have a small standard deviation.
14,091 | What are the theoretical guarantees of bagging | The main use-case for bagging is reducing variance of low-biased models by bunching them together. This was studied empirically in the landmark paper "An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants" by Bauer and Kohavi. It usually works as advertised.
However, contrary to p...
14,092 | Good resources (online or book) on the mathematical foundations of statistics | Maths:
Grinstead & Snell, Introduction to Probability (it's free)
Strang, Introduction to Linear Algebra
Strang, Calculus
Also check out Strang on MIT OpenCourseWare.
Statistical theory (it's more than just maths):
Cox, Principles of Statistical Inference
Cox & Hinkley, Theoretical Statistics
Geisser, Modes of Parametr...
14,093 | Good resources (online or book) on the mathematical foundations of statistics | Some important mathematical statistics topics are:
Exponential family and sufficiency.
Estimator construction.
Hypothesis testing.
References regarding mathematical statistics:
Mood, A. M., Graybill, F. A., & Boes, D. C. (1974). Introduction to theory of statistics. (B. C. Harrinson & M. Eichberg, Eds.) (3rd ed., ...
14,094 | Good resources (online or book) on the mathematical foundations of statistics | Have a look at the Mathematical Biostatistics Bootcamp...
14,095 | Good resources (online or book) on the mathematical foundations of statistics | SEM is (in my opinion) very far removed from traditional probability theory and some basic statistical techniques that extend easily from it (such as point estimation, large sample theory, and Bayesian statistics). I think SEM is the result of a great deal of abstraction from such methods. I furthermore think that the ...
14,096 | Which diagnostics can validate the use of a particular family of GLM? | I have some tips:
(1) How residuals ought to compare to fits isn't always all that obvious, so it's good to be familiar with diagnostics for particular models. In logistic regression models, for example, the Hosmer-Lemeshow statistic is used to assess goodness of fit; leverage values tend to be small where the estima...
14,097 | Which diagnostics can validate the use of a particular family of GLM? | You may find it interesting to read the vignette (introductory manual) for the R package fitdistrplus. I recognize that you prefer to work in Stata, but I think the vignette will be sufficiently self-explanatory that you can get some insights into the process of inferring distributional families from data. You will p...
14,098 | Should confidence intervals for linear regression coefficients be based on the normal or $t$ distribution? | (1) When the errors are normally distributed and their variance is not known, then $$\frac{\hat{\beta} - \beta_0}{{\rm se}(\hat{\beta})}$$ has a $t$-distribution under the null hypothesis that $\beta_0$ is the true regression coefficient. The default in R is to test $\beta_0 = 0$, so the $t$-statistics reported there a...
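The practical consequence is the critical value used to build the interval; with few residual degrees of freedom the $t$ quantile is noticeably larger than the normal one (df = 10 here is an arbitrary illustration):

```python
from scipy import stats

# 97.5% quantiles used for a two-sided 95% confidence interval
t_crit = stats.t.ppf(0.975, df=10)   # small-sample critical value
z_crit = stats.norm.ppf(0.975)       # large-sample (normal) limit
print(t_crit, z_crit)
```

As the degrees of freedom grow, `t_crit` shrinks toward `z_crit`, which is why the distinction only matters in small samples.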
14,099 | How to add two dependent random variables? | As vinux points out, one needs the joint distribution of $A$ and $B$, and it is not obvious from OP Mesko's response "I know Distributive function of A and B" that he is saying he knows the joint distribution of A and B: he may well be saying that he knows the marginal distributions of A and B. However, assuming that...
14,100 | How to add two dependent random variables? | Beforehand, I don't know if what I'm saying is correct, but I got stuck on the same problem and I tried to solve it in this way: express the joint distribution using the Heaviside step function:
$$
f_{A,B}(a,b)=(a+b) H(a,b) H(-a+1,-b+1)
$$
or equivalently
$$
f_{A,B}(a,b)=(a+b)(H(a)-H(a-1))(H(b)-H(b-1))
$$
Now you can ...
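As a quick sanity check on the proposed joint distribution, the density $f_{A,B}(a,b) = a + b$ restricted to the unit square should integrate to 1:

```python
from scipy.integrate import dblquad

# integrate f(a, b) = a + b over the unit square [0, 1] x [0, 1];
# dblquad's integrand takes arguments in (inner, outer) order
total, _err = dblquad(lambda b, a: a + b, 0, 1, 0, 1)
print(total)  # 1.0
```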