Dataset schema (column name: dtype, value/length range):

idx: int64, values 1 to 56k
question: string, lengths 15 to 155
answer: string, lengths 2 to 29.2k
question_cut: string, lengths 15 to 100
answer_cut: string, lengths 2 to 200
conversation: string, lengths 47 to 29.3k
conversation_cut: string, lengths 47 to 301
44,601
Bayesian inference and testable implications
I'm not a Bayesian expert and I'm happy to stand corrected, but to me the most straightforward & principled way to test this would be to define a region of practical equivalence (ROPE) around c and then estimate how much posterior density falls inside this region. For example, let's say that, based on theory and domain k...
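The ROPE check described above can be sketched in a few lines. This is a hypothetical illustration: the draws below are faked with a normal sampler, whereas in practice they would come from your MCMC posterior, and the ROPE half-width of 0.1 is an assumed choice, not one from the answer.

```python
import random

random.seed(0)

# Stand-in for real MCMC draws of the parameter (assumed values for
# illustration); in practice these come from your posterior sampler
# (Stan, PyMC, ...).
posterior_draws = [random.gauss(0.03, 0.05) for _ in range(10_000)]

c = 0.0           # the asserted value of c
half_width = 0.1  # ROPE half-width, chosen from theory and domain knowledge

inside = sum(c - half_width <= d <= c + half_width for d in posterior_draws)
rope_fraction = inside / len(posterior_draws)
# A large fraction of posterior mass inside the ROPE supports the claim
# that the parameter is practically equivalent to c.
```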
44,602
Bayesian inference and testable implications
EDIT: innisfree is right. Bayes factors seem like a better approach than what I have provided here. I'm leaving it up for posterity, but it isn't the right approach. Because this problem really relies on a single assertion (namely, that $c$ has some value), we can simply estimate the following model $$ y \sim \mathca...
44,603
Which data is "more normal"?
If you want to quantify departure from normality, then a good measure is the Kolmogorov-Smirnov test statistic $D.$ Let's compare two samples of size $n = 5000.$ The sample x below is taken using an excellent algorithm in R that is known to sample from an essentially perfect normal population, $\mathsf{Norm}(\mu=1.5,...
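The comparison set up in this answer uses R; a minimal stdlib-Python sketch of the same idea follows. The second sample is a moment-matched uniform, an assumed stand-in for "less normal" data, and the KS statistic $D$ comes out visibly larger for it.

```python
import math
import random

random.seed(1)

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def ks_stat(sample, mu, sigma):
    """One-sample Kolmogorov-Smirnov statistic D against Norm(mu, sigma)."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = normal_cdf(x, mu, sigma)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

n = 5000
x = [random.gauss(1.5, 1.0) for _ in range(n)]       # genuinely normal sample
y = [random.uniform(-0.23, 3.23) for _ in range(n)]  # same mean and sd, not normal

d_x = ks_stat(x, 1.5, 1.0)   # small: close to the reference normal
d_y = ks_stat(y, 1.5, 1.0)   # larger: uniform shape departs from normal
```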
44,604
Which data is "more normal"?
Let us begin with the assumption that you have data collected across time that is drawn from a normal distribution. If it is, then the frequency is irrelevant even if one level of frequency looks nicer than another. That is due to Donsker's Theorem. As to My question is, is it valid to say that on the basis of a lo...
44,605
Unbiased estimator of $\lambda(1 - e^\lambda)$ when $x_1,\ldots,x_n$ are i.i.d Poisson($\lambda$)
Actually, an unbiased estimator does exist. Let us define $\tau = \lambda e^\lambda$ so that $$\lambda(1-e^\lambda) = \lambda - \tau$$ Since the sample mean $\bar{X}$ is unbiased for $\lambda$, really all we need is an unbiased estimator for $\tau$. An obvious starting place is to use the invariance property of the MLE...
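The truncated answer stops before the final form. One construction that completes the argument (my derivation, not necessarily the one the author used): with $S=\sum X_i \sim \text{Poisson}(n\lambda)$, $E[t^S]=e^{n\lambda(t-1)}$, and differentiating at $t=1+1/n$ gives $E[(S/n)(1+1/n)^{S-1}]=\lambda e^\lambda$. A Monte Carlo check of unbiasedness:

```python
import math
import random

random.seed(2)

def poisson(lam):
    # Knuth's method; fine for small lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def estimator(xs):
    # xbar is unbiased for lambda; (S/n)(1+1/n)^(S-1) is unbiased for
    # lambda*e^lambda (from E[t^S] = exp(n*lambda*(t-1)) at t = 1+1/n),
    # so their difference is unbiased for lambda*(1 - e^lambda).
    n, S = len(xs), sum(xs)
    tau_hat = (S / n) * (1 + 1 / n) ** (S - 1)
    return S / n - tau_hat

lam, n, reps = 1.0, 10, 50_000
mc_mean = sum(estimator([poisson(lam) for _ in range(n)])
              for _ in range(reps)) / reps
true_value = lam * (1 - math.exp(lam))   # = 1 - e, about -1.718
```

The Monte Carlo mean lands close to the true value, consistent with unbiasedness.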
44,606
Hyperparameter Optimization Using Gaussian Processes
As a result of doing that you will also overfit the validation set (the more so the more you tuned the hyperparameters - if you tried two or three configurations, the effect is less than if you did some systematic search e.g. using the Gaussian process approach). The standard solution to this would be to not just have ...
44,607
Why does non-parametric bootstrap not return the same sample over and over again?
Each member of the bootstrap sample is selected randomly with replacement from the data set. If we were to sample without replacement, then every sample would simply be a re-ordering of the same data. But, as a consequence of replacement, the bootstrap samples differ in how many times they include each data point (whic...
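A quick numerical illustration of the point (Python rather than R, with an assumed n = 1000): sampling with replacement leaves each point out with probability $(1-1/n)^n \approx e^{-1}$, so a bootstrap sample contains only about 63% of the distinct original points, never the identical data set.

```python
import random

random.seed(3)

n = 1000
data = list(range(n))

# One bootstrap sample: n draws WITH replacement from the data
boot = [random.choice(data) for _ in range(n)]

distinct_fraction = len(set(boot)) / n
# Theory: each point is left out with probability (1 - 1/n)^n -> e^-1,
# so roughly 63.2% of the points appear at least once.
```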
44,608
Why does non-parametric bootstrap not return the same sample over and over again?
@user20160's explanation is fine. Here's an example of 10 bootstrap samples of the sequence from 1 to 5, showing that some values will be represented more than once and other values will not be represented:
x <- 1:5
t(replicate(10, sort(sample(x, replace = TRUE))))
     [,1] [,2] [,3] [,4] [,5]
[1,]    2    2    4    4...
44,609
Why does non-parametric bootstrap not return the same sample over and over again?
Just to confirm the answers here, the key misunderstanding is that the questioner believes there is no replacement in the sampling. If there were 10 elements, 10 random sampling events, and 2 replications, then without replacement each replication would be identical to the other. The number of random sampling events can never ex...
44,610
Should I gloss over the linear algebra chapter in the book "Deep Learning" by Ian Goodfellow?
This is a question that often pops up when reading mathematical literature. The initial chapters, of this book or any other math book, lay out tools that you will be using in later chapters, so strictly speaking, you will not understand the rest of the book without understanding these foundational chapters. Realistical...
44,611
Is it wise to use predicted values to model predicted values further down the line?
I will answer your questions in reverse order: 2) Your approach is correct. This is called recursive forecasting: Generate a forecast for one step ahead $\hat{y}_{t+1} = f(y_t)$, then use that to generate a forecast for two steps ahead $\hat{y}_{t+2} = f(\hat{y}_{t+1})$, etc...until you have $\hat{y}_{T}$ for your des...
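Recursive forecasting as described can be sketched in a few lines; here $f$ is a hypothetical fitted one-step model (an AR(1)-style rule with made-up coefficients, not from the answer):

```python
def f(y):
    # Hypothetical fitted one-step-ahead model: y_hat_{t+1} = 0.8*y_t + 2
    return 0.8 * y + 2.0

y_t = 20.0      # last observed value (assumed)
horizon = 5

forecasts = []
y_hat = y_t
for _ in range(horizon):
    y_hat = f(y_hat)      # feed the previous forecast back into the model
    forecasts.append(y_hat)
# forecasts[0] is y_hat_{t+1}, ..., forecasts[-1] is y_hat_{t+5}
```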
44,612
Is it wise to use predicted values to model predicted values further down the line?
(1) You should "mix" the approaches by using a model that captures both features. When your data shows multiple features (e.g., drift and seasonality) it is a good idea to use a model that captures all of these features together. This is preferable to attempting to make ad hoc changes to a model that only captures on...
44,613
Proving $P(|X+Y|\leq 2|X|) > \dfrac{1}{2}$
I think it's much easier to solve this problem using the triangle inequality rather than using a squaring approach. Since $|X+Y| \le |X| + |Y|$, we have that $$P(|X+Y|\le 2|X|) \ge P(|X|+|Y|\le 2|X|)=P(|Y|\le|X|)=1/2$$ Do you specifically need to show that the probability is greater than 1/2?
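A quick Monte Carlo sanity check of the claim for one concrete case (iid standard normals, a distribution assumed here purely for illustration): the estimate lands near 0.65, comfortably above 1/2.

```python
import random

random.seed(4)

reps = 100_000
hits = 0
for _ in range(reps):
    x, y = random.gauss(0, 1), random.gauss(0, 1)
    if abs(x + y) <= 2 * abs(x):   # the event whose probability we bound
        hits += 1
p_hat = hits / reps
# For iid standard normals the region (y-x)(y+3x) <= 0 is a double cone,
# and rotation invariance gives its probability as an angular fraction.
```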
44,614
Proving $P(|X+Y|\leq 2|X|) > \dfrac{1}{2}$
If you draw the lines Y = X and Y = -3X, the first has slope 1, and the second has slope -3. The two lines divide the plane into four quadrants, with the solution set being the left and right quadrants. So you just have to show that more than half of the probability mass is in those two quadrants. Call the two quadrant...
44,615
Proving $P(|X+Y|\leq 2|X|) > \dfrac{1}{2}$
The image below demonstrates how a partitioning of the area $|x+y| \leq 2|x|$ helps to prove $P[|X+Y| \leq 2|X|] > \frac{1}{2}$. The hatched region (region 2) corresponds to your area $$|x+y| \leq 2|x| \qquad \text{or} \qquad (y-x)(3x+y) \leq 0 $$ Part of this region (the pink colored hatched region, region 2a) is a m...
44,616
Proving $P(|X+Y|\leq 2|X|) > \dfrac{1}{2}$
Changing this probability into an expectation is the key to solving this problem easily. As noted by @Accumulation and @Martijn Weterings, the absolute value function divides the (x,y)-plane into conic regions. Note that $$ P(|X+Y|<2|X|)=E[I(|X+Y|<2|X|)] $$ where $I()$ is the usual zero-one indicator function. (Note th...
44,617
two questions; how to interpret the AUROC (area under the ROC curve)
Would it be correct to say that there is 85% chance that $A$ has the disease? No. Assuming your model is correct and well-calibrated, the probability that $A$ has the disease is the model's estimate that $A$ has the disease. The meaning of AUROC (area under the ROC curve, to distinguish from the less-common area unde...
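The probabilistic reading of AUROC (the probability that a randomly chosen diseased subject scores higher than a randomly chosen healthy one) can be checked directly by brute force over all pairs. The scores below are hypothetical, generated so that diseased subjects tend to score higher:

```python
import random

random.seed(5)

# Hypothetical model scores; diseased subjects tend to score higher.
diseased = [random.gauss(1.0, 1.0) for _ in range(300)]
healthy = [random.gauss(0.0, 1.0) for _ in range(300)]

# AUROC = P(score(diseased) > score(healthy)), ties counted as 1/2.
total = 0.0
for p in diseased:
    for q in healthy:
        total += (p > q) + 0.5 * (p == q)
auroc = total / (len(diseased) * len(healthy))
# Note: this is a ranking statistic; it says nothing about any single
# individual's probability of disease.
```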
44,618
two questions; how to interpret the AUROC (area under the ROC curve)
If the regression model gives me a subject AA with a predicted probability of 0.6, and this seems to be a high probability compared to other subjects, would it be correct to say that there is 85% chance that AA has the disease? The answer is "no". The AUROC does not care about the actual value of your probability predi...
44,619
Given an adjacency matrix, how can we fit a covariance matrix based on that for a graph without running into a NON-positive definite matrix?
Yes, for example if you choose $\rho$ small enough to ensure that your matrix is strictly diagonally dominant, then it is guaranteed to be positive definite. In this case "small enough" means $|\rho|<1/r$, where $r$ is the valency of the regular graph. But possibly you do not want to choose $\rho$ so small. A useful th...
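The $|\rho| < 1/r$ bound can be verified on a small example (my example, not from the answer): the complete graph K4 is 3-regular, so any $|\rho| < 1/3$ makes $\Sigma = I + \rho A$ strictly diagonally dominant, and a plain Cholesky factorization, which succeeds exactly when the matrix is positive definite, goes through.

```python
# 3-regular example graph: the complete graph K4 (every vertex has valency 3).
A = [[0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [1, 1, 1, 0]]
r = 3
rho = 0.9 / r   # |rho| < 1/r: each off-diagonal row sum is r*rho = 0.9 < 1

n = len(A)
Sigma = [[1.0 if i == j else rho * A[i][j] for j in range(n)] for i in range(n)]

def cholesky(M):
    """Plain Cholesky; raises unless M is positive definite."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = M[i][i] - s
                if d <= 0:
                    raise ValueError("not positive definite")
                L[i][j] = d ** 0.5
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L

L = cholesky(Sigma)   # no exception, so Sigma is positive definite
```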
44,620
Given an adjacency matrix, how can we fit a covariance matrix based on that for a graph without running into a NON-positive definite matrix?
Here is an explanation which might provide some intuition about what is going on here. Suppose that in your graph you have three vertices where vertex 1 is adjacent to both vertices 2 and 3, but vertices 2 and 3 are not adjacent to each other. Let $X_1$, $X_2$, and $X_3$ be the corresponding random variables being mode...
44,621
Given an adjacency matrix, how can we fit a covariance matrix based on that for a graph without running into a NON-positive definite matrix?
For the special case of precision matrices $K = \Sigma^{-1}$, some approaches use condition-number theory (see the Wikipedia article on condition number). It helps to find a constant by which diagonal elements can be multiplied when the obtained matrix is not positive definite. The graph2prec function in SpiecEasi (...
44,622
Does it make sense to interact 2 dummy variables?
Sure, you can include an interaction between categorical variables in your regression. The interpretation is particularly easy if the categorical variables are binary (i.e. have only two categories). Let's look at your example and how to interpret it. You only told us one of the binary variables, $\mathrm{Education...
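With two binary dummies, the saturated interaction model just re-parameterizes the four cell means, which makes the interpretation mechanical. A sketch with made-up cell means for log(wage); the variable names Education and Female follow the answer, and the numbers are assumed for illustration:

```python
# Hypothetical cell means of log(wage), indexed by (Education, Female).
# For the saturated model  y = b0 + b1*Ed + b2*Fem + b3*Ed*Fem,
# the coefficients are simple differences of cell means.
mean = {(0, 0): 2.0, (1, 0): 2.5, (0, 1): 1.8, (1, 1): 2.6}

b0 = mean[(0, 0)]                                  # baseline cell
b1 = mean[(1, 0)] - mean[(0, 0)]                   # education effect for males
b2 = mean[(0, 1)] - mean[(0, 0)]                   # female effect, no education
b3 = (mean[(1, 1)] - mean[(0, 1)]) - (mean[(1, 0)] - mean[(0, 0)])
# b3: how much the education effect differs for females versus males
```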
44,623
Does it make sense to interact 2 dummy variables?
It makes sense, but only in situations where all possible combinations of those variables occur in the data, that is: when there are cases that have 0;0, 0;1, 1;0 and 1;1 in your data (the first number is the value of the first dummy variable, the second the value of the second). In such a situation ther...
44,624
Does it make sense to interact 2 dummy variables?
I don't know if anyone else can chime in here with a better answer, but I have seen this. There's a lot of debate as to whether to include interaction terms at all, but it is possible with 2 binary variables. You didn't tell us what the binary variables were, so it's hard to answer your question. But let's say if on...
44,625
Confusion about interpreting log likelihood (and likelihood ratio test) output
We take the log-likelihood because each case in the dataset gets a likelihood, and the overall likelihood is the product of these case-wise likelihoods; the log-likelihood is then the sum of their logs. Each of these likelihoods is less than 1, and when you multiply lots of numbers less than 1 together you tend to get really, really small numbers. Nothing wrong with those reall...
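The numerical point is easy to demonstrate: multiplying many per-case likelihoods underflows double precision to exactly zero, while summing their logs stays perfectly manageable. A sketch with an assumed 2000 cases, each of likelihood 0.5:

```python
import math

liks = [0.5] * 2000   # per-case likelihoods, all below 1

product = 1.0
for lk in liks:
    product *= lk     # 0.5**2000 ~ 1e-602: underflows to exactly 0.0

log_lik = sum(math.log(lk) for lk in liks)   # about -1386.3, no trouble at all
```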
44,626
Confusion about interpreting log likelihood (and likelihood ratio test) output
We try to minimise the negative log-likelihood function, which is equivalent to maximising the log-likelihood function. The model with the lower negative log-likelihood value would be a better fit.
44,627
How did Generative Adversarial Networks get their name?
In GANs, there are two networks. The first network generates fake data. The second network is shown examples of both real data and fake data generated by the first network. Its goal is to determine whether its input is real or fake. The second network is trained to better distinguish real from fake data, and the first ...
44,628
How did Generative Adversarial Networks get their name?
From the paper that introduced GANs {1}: In the proposed adversarial nets framework, the generative model is pitted against an adversary: a discriminative model that learns to determine whether a sample is from the model distribution or the data distribution. The generative model can be thought of as analogous to...
44,629
How to prove this decomposition of sum of squares?
I am afraid that the statement you show is wrong. Adding and subtracting $\bar{x}$: $$\sum_{i=1}^{n}(x_i-\mu)^2=\sum_{i=1}^{n}((x_i-\bar{x})-(\mu-\bar{x}))^2$$ Expanding $(a-b)^2=a^2-2ab+b^2$: $$=\sum_{i=1}^{n}\left((x_i-\bar{x})^2-2(x_i-\bar{x})(\mu-\bar{x})+(\mu-\bar{x})^2\right)$$ Rearranging the sums $$=\sum_{i=1}^{n...
44,630
How to prove this decomposition of sum of squares?
I see that while I was writing this, @KarelMacek (+1) gave an identical proof. \begin{align} &\sum_{i=1}^{n}(x_i-\mu)^2= \\ &\sum_{i=1}^{n}x_i^2-2\sum_{i=1}^{n}x_i\mu+\sum_{i=1}^{n}\mu^2=\\ &\sum_{i=1}^{n}x_i^2-2\mu\left(\sum_{i=1}^{n}x_i\right)+n\mu^2=\\ &\sum_{i=1}^{n}x_i^2-2\mu n\overline x+n\mu^2=\\ &\sum_{i=1}^{n}(x_i-\...
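The identity both proofs establish, $\sum_i(x_i-\mu)^2=\sum_i(x_i-\bar x)^2+n(\bar x-\mu)^2$, is easy to check numerically (sketch with made-up data):

```python
import random

random.seed(1)
x = [random.uniform(-5, 5) for _ in range(50)]   # arbitrary sample
mu = 1.7                                          # arbitrary fixed constant
n = len(x)
xbar = sum(x) / n

lhs = sum((xi - mu) ** 2 for xi in x)
rhs = sum((xi - xbar) ** 2 for xi in x) + n * (xbar - mu) ** 2

# The two sides agree up to floating-point rounding:
assert abs(lhs - rhs) < 1e-9
```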
44,631
Jaccard similarity coefficient vs. Point-wise mutual information coefficient
These two are quite different. Still, let us try to "bring them to a common denominator", to see the difference. Both Jaccard and PMI could be extended to a continuous data case, but we'll observe the primeval binary data case. Using the a,b,c,d convention of the 4-fold table, as here, Y 1 0 ...
44,632
Jaccard similarity coefficient vs. Point-wise mutual information coefficient
To supplement the top answer: You want high Jaccard similarity if you care about whether the two items co-occur frequently. You want high PMI if you care about how much more likely than chance it is that the two items co-occur. For two items with low probabilities and moderate co-occurrence, Jaccard will have really low ...
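A small sketch (hypothetical binary vectors) makes the contrast concrete: two rare items with moderate co-occurrence get a modest Jaccard score but a clearly positive PMI. It uses the a,b,c,d cell counts of the 4-fold table, with $a$ = both present:

```python
import math

def jaccard_and_pmi(x, y):
    """Jaccard similarity and pointwise mutual information for two
    binary vectors, via the a,b,c cell counts of the 4-fold table."""
    n = len(x)
    a = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 1)
    b = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
    c = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
    jaccard = a / (a + b + c)
    # PMI = log[ p(x=1, y=1) / (p(x=1) * p(y=1)) ] = log[ a*n / ((a+b)(a+c)) ]
    pmi = math.log((a * n) / ((a + b) * (a + c)))
    return jaccard, pmi

# Two rare items that co-occur half the time either appears:
x = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
j, p = jaccard_and_pmi(x, y)   # j = 1/3, p = log(2.5) > 0
```

Jaccard only sees that the items rarely share occurrences, while PMI sees that their co-occurrence is 2.5 times more frequent than independence would predict.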
44,633
Is the normal distribution a better approximation to the binomial distribution with proportions near or far from 0.5?
NOTE: Following up on @whuber's comment, I realized that I was imposing aesthetic constraints on the plotting of the values in terms of the breaks options in hist(). Running the same simulation with the same seed, a symmetrical illustration is now generated. I believe this addresses the issue. You may want to refer to...
44,634
Is the normal distribution a better approximation to the binomial distribution with proportions near or far from 0.5?
The rule of thumb says that both $N\pi $ and $N(1-\pi)$ should be $>10$. For $\pi=.5$ this demands $N>20$. But for $\pi=0.2$ (as well as for $\pi=0.8$) it demands $N>50$. So we see that the "approximability" kicks in a lot earlier when $p=.5$.
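The rule of thumb translates directly into a minimal sketch computing the smallest $N$ it allows for a given $\pi$:

```python
def min_n_for_normal_approx(pi, threshold=10):
    """Smallest n satisfying both n*pi > threshold and n*(1-pi) > threshold
    (the rule of thumb quoted above)."""
    n = 1
    while not (n * pi > threshold and n * (1 - pi) > threshold):
        n += 1
    return n

assert min_n_for_normal_approx(0.5) == 21   # first n with n*0.5 > 10
assert min_n_for_normal_approx(0.2) == 51   # first n with n*0.2 > 10
```

So the required sample size grows as $\pi$ moves away from 0.5, matching the answer's point that the approximation "kicks in" earliest at $\pi = 0.5$.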
44,635
Unconditional mean and variance of a stationary VAR(1) model
Taking the variance of both sides of the equation $$ y_t = \nu + A_1 y_{t-1} + u_t $$ leads to $$ \operatorname{Var}y_t = A_1\operatorname{Var}y_{t-1}A_1^T+\Sigma_u. $$ Stationarity implies that $\operatorname{Var}y_t =\operatorname{Var}y_{t-1}=\Gamma_0$ so you need to solve the matrix equation $$ \Gamma_0 = A_1\Gamma_0 ...
44,636
Unconditional mean and variance of a stationary VAR(1) model
According to Lütkepohl (2005), p. 14-15, if we have a $K$-variate VAR(1) process of the form $$ y_t = \nu + A_1 y_{t-1} + u_t, $$ then the unconditional mean is $$ (I_K-A_1)^{-1}\nu $$ (where $I_K$ is an identity matrix of dimension $K\times K$) and the unconditional covariance for lag $h$ (i.e. $\text{Cov}(y_t,y_{t-h...
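For the scalar case $K=1$ these formulas specialize to the familiar AR(1) results, mean $\nu/(1-a)$ and $\Gamma_0 = \sigma_u^2/(1-a^2)$, which a quick simulation can check (sketch with arbitrary stationary parameter values):

```python
import random

random.seed(42)
nu, a, sigma = 2.0, 0.5, 1.0   # arbitrary AR(1) parameters with |a| < 1

# K = 1 specialization of the formulas above:
mean_theory = nu / (1 - a)           # (I - A1)^{-1} nu          -> 4.0
var_theory = sigma**2 / (1 - a**2)   # solves G0 = a*G0*a + s^2  -> 1.333...

# Long simulated path, started at the stationary mean:
y, path = mean_theory, []
for _ in range(200_000):
    y = nu + a * y + random.gauss(0, sigma)
    path.append(y)

mean_sim = sum(path) / len(path)
var_sim = sum((v - mean_sim) ** 2 for v in path) / len(path)
# mean_sim ~ 4.0 and var_sim ~ 1.333, up to Monte Carlo error
```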
44,637
Name for 1 minus Bernoulli variable
As @Tim has already shown in his answer, if $X$ is a Bernoulli random variable, then so is $Y = 1-X$. I would call $Y$ the "Complementary Bernoulli random variable" to $X$. I don't know that I've ever heard it called that, or anything else, but if I needed a short and sweet name, that would be it. Edit: I guess it hasn'...
44,638
Name for 1 minus Bernoulli variable
It is still a Bernoulli variable, for example if $Y = 1-X$ where $X \sim \mathrm{Bern}(p)$, then $$ Y \sim \mathrm{Bern}(1-p) $$ moreover $$ \Bbb{1}_{Y=0} \sim \mathrm{Bern}(p) $$ where $\Bbb{1}$ is an indicator function, so it is just a matter of labeling the categories. Notice that the labeling is arbitrary since it ...
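A quick simulation sketch illustrates the relabeling: if $X\sim\mathrm{Bern}(p)$ then $Y=1-X\sim\mathrm{Bern}(1-p)$:

```python
import random

random.seed(0)
p = 0.3
n = 100_000
x = [1 if random.random() < p else 0 for _ in range(n)]
y = [1 - xi for xi in x]   # the "complementary" variable

# Empirical success probabilities:
p_hat_x = sum(x) / n       # close to p = 0.3
p_hat_y = sum(y) / n       # close to 1 - p = 0.7
```

The two empirical rates add to exactly 1, which is just the label-swapping point made above.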
44,639
a regression through the origin
Here is an illustration that simulates $y$ and $x$ independently of each other so that the true slope is zero. The mean of $y$ is nonzero, such that the true intercept is also nonzero. The LS line without an intercept must start at $(0,0)$, and will try to "catch up" with the data points as quickly as po...
44,640
a regression through the origin
Basically, forcing a regression through zero acts as if the software had entered an infinite number of data points at (0,0). This makes the normal R^2 formula useless, and a different R^2 formula is used. The result of this different R^2 formula is typically very high. You can go to this link to get more specifics- ht...
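A sketch of the same setup (independent $x$ and $y$, nonzero mean of $y$, so the true slope is zero) shows the no-intercept fit inventing a positive slope:

```python
import random

random.seed(7)
n = 10_000
x = [random.uniform(0, 10) for _ in range(n)]
y = [5 + random.gauss(0, 1) for _ in range(n)]   # independent of x, mean 5

xbar = sum(x) / n
ybar = sum(y) / n

# OLS with intercept: slope = Sxy / Sxx (estimates the true slope, 0)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
sxx = sum((xi - xbar) ** 2 for xi in x)
slope_with_intercept = sxy / sxx   # close to 0

# OLS through the origin: slope = sum(x*y) / sum(x^2)
slope_through_origin = (sum(xi * yi for xi, yi in zip(x, y))
                        / sum(xi**2 for xi in x))   # clearly positive
```

With an intercept the fitted slope hovers near zero, while the through-the-origin fit reports a substantial slope purely because the line is pinned at $(0,0)$ and must "catch up" with data centered at height 5.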
44,641
Correlation of signs of a jointly Gaussian RV
For convenience let's call $\operatorname{sgn}(X_1),\operatorname{sgn}(X_2)$ as $S_1$ and $S_2$, respectively. There are only $9$ possible combinations of $(S_1,S_2)$: $(\pm1,\pm1)$, and at least one of the $S$ being $0$. Now, since we are looking for $E[S_1S_2]$, ignoring the states of $S=0$ will not affect the resul...
44,642
Correlation of signs of a jointly Gaussian RV
$$\mathbb{E}[ \text{sign}(X_1) \text{sign}(X_2)] = 1 * (P(X_1 \ge 0,X_2 \ge 0) + P(X_1 \le 0,X_2 \le 0)) - (P(X_1 \ge 0,X_2 \le 0) + P(X_1 \le 0,X_2 \ge 0))$$ which in turn $$= 2P(X_1 \ge 0,X_2 \ge 0) - 2P(X_1 \ge 0,X_2 \le 0)$$ by symmetry. Plugging in the Bivariate Normal density, this evaluates (integrates) to $\fr...
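Both derivations lead to the well-known closed form $\mathbb E[\operatorname{sgn}(X_1)\operatorname{sgn}(X_2)]=\frac{2}{\pi}\arcsin\rho$ for standard bivariate normals, which a Monte Carlo sketch can confirm:

```python
import math
import random

random.seed(0)
rho = 0.6
n = 200_000

total = 0
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = z1
    x2 = rho * z1 + math.sqrt(1 - rho**2) * z2   # corr(x1, x2) = rho
    total += (1 if x1 >= 0 else -1) * (1 if x2 >= 0 else -1)

estimate = total / n                        # Monte Carlo estimate
theory = (2 / math.pi) * math.asin(rho)     # closed form, ~0.4097
```

(The event $X_i = 0$ has probability zero for continuous Gaussians, matching the first answer's point that the $S=0$ states can be ignored.)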
44,643
Standard Error for a Parameter in Ordinary Least Squares [duplicate]
In matrix notation we have data $\left (\mathbf y, \mathbf X\right)$ and we consider the model $$\mathbf y = \mathbf X\beta + \mathbf u$$ where for the moment we only assume that the regressor matrix contains a series of ones, so that we can safely assume that the "error term" $\mathbf u$ has zero mean. We do not as ye...
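In the single-regressor case the estimator $\widehat{\operatorname{Var}}(\hat\beta)=s^2(\mathbf X^\top\mathbf X)^{-1}$ reduces to $\operatorname{SE}(\hat\beta_1)=s/\sqrt{S_{xx}}$; a sketch with made-up data:

```python
import math
import random

random.seed(3)
n = 1_000
x = [random.uniform(0, 1) for _ in range(n)]
y = [2 + 3 * xi + random.gauss(0, 0.5) for xi in x]   # made-up true model

xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
intercept = ybar - slope * xbar

# Residual variance estimate s^2 with n - 2 degrees of freedom:
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(e**2 for e in resid) / (n - 2)

se_slope = math.sqrt(s2 / sxx)   # the slope entry of s^2 (X'X)^{-1}
```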
44,644
How to choose between logit, probit or linear probability model?
Modeling a dichotomous outcome using linear regression is a big no-no. The error terms will not be normally distributed, there will be heteroskedasticity, and predicted values will fall outside the logical boundaries of 0 and 1. Logit and probit differ in the assumption of the underlying distribution. Logit assumes t...
44,645
How to choose between logit, probit or linear probability model?
Following the response of whauser, I would also add that it depends on your data. I learnt from my professor that: If we are dealing with spatial data of high dimensionality in our fixed effects, it would be better to use LPM to minimize bias (and then use HAC correction), because logit and probit suffer from « incide...
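The point about predictions escaping the 0-1 range is easy to demonstrate by fitting a linear probability model with OLS on toy data (made-up example):

```python
# Toy data: binary outcome that switches from 0 to 1 as x grows.
x = list(range(11))      # 0..10
y = [0] * 6 + [1] * 5    # y = 1 once x > 5

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar

fitted = [intercept + slope * xi for xi in x]
# The OLS line dips below 0 at x = 0 and exceeds 1 at x = 10,
# i.e. it produces "probabilities" outside the logical [0, 1] bounds.
```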
44,646
Can p-value be greater than 1?
The $p$ value, as explained very nicely in this post by @fcop is not the probability of making a type I error, but the probability of getting a value for a test statistic higher than the one we got, under the NULL hypothesis. We have a fixed type I error decided upon whereby we are ready to accept only a certain risk ...
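Since a $p$ value is itself a tail probability computed under the null, any correct computation keeps it in $[0, 1]$; e.g. the two-sided normal-test $p$ value (sketch):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def two_sided_p(z):
    """Two-sided p value for a z test statistic."""
    return 2 * (1 - normal_cdf(abs(z)))

# Whatever the observed statistic, the p value stays in [0, 1];
# it equals 1 exactly when the statistic is 0.
ps = [two_sided_p(z) for z in (-4.0, -1.0, 0.0, 0.5, 2.0, 10.0)]
```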
44,647
Variable Selection Techniques for Multivariate Multiple Regression
Roman Kh is correct to warn you against ever using stepwise approaches. One of the best discussions of their pitfalls is Peter Flom's paper Stop Using Stepwise http://www.lexjansen.com/pnwsug/2008/DavidCassell-StoppingStepwise.pdf That said, every statistician and their brother has a paper or approach to variable selec...
44,648
Variable Selection Techniques for Multivariate Multiple Regression
The well-known textbook "Introduction to Statistical Learning" has a nice treatment of this subject. Chapter 6 in this free PDF is easy to read. The structure of the chapter looks like this:
44,649
Variable Selection Techniques for Multivariate Multiple Regression
Stepwise regressions are controversial and might lead to model misspecification. Other techniques are Lasso and Ridge regression, as well as Least angle regression.
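As a minimal sketch of how a penalized alternative behaves, the one-predictor ridge estimator $\hat\beta_\lambda=\sum x_iy_i/(\sum x_i^2+\lambda)$ (no intercept, made-up data) shrinks the coefficient smoothly toward zero as the penalty grows:

```python
import random

random.seed(5)
x = [random.gauss(0, 1) for _ in range(200)]
y = [2 * xi + random.gauss(0, 0.5) for xi in x]   # made-up data, true slope 2

def ridge_slope(lam):
    """Closed-form ridge estimate for one centered predictor, no intercept."""
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi**2 for xi in x) + lam)

# lam = 0 recovers plain least squares; larger lam shrinks toward 0:
betas = [ridge_slope(lam) for lam in (0.0, 10.0, 100.0, 1000.0)]
```

Unlike stepwise selection, the penalty shrinks coefficients continuously instead of making all-or-nothing inclusion decisions (lasso additionally sets some exactly to zero).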
44,650
Variable Selection Techniques for Multivariate Multiple Regression
Partial Least Squares (PLS) is designed to take multivariate/univariate response variables. Check out the "pls" R package for more details.
44,651
Variable Selection Techniques for Multivariate Multiple Regression
GUESS or the corresponding R package R2GUESS is available for variable selection of multivariate response. The reference is: Liquet, B., Bottolo, L., Campanella, G., Richardson, S., & Chadeau-Hyam, M. (2016). R2GUESS : A Graphics Processing Unit-Based R Package for Bayesian Variable Selection Regression of Multivariate...
44,652
Extreme values in the data
A key distinction: mismeasurement or extreme events? Are extreme values due to extreme events or error? You generally want to include the former but exclude the latter. You don't want your results driven by error. More generally, you don't want results driven by bizarre, weird behavior that's not related to what you're...
44,653
Extreme values in the data
First off, you should check the nature of your outliers. Are they within the natural range of your variable? E.g. you have measured weight for 100 people. Most would be between 50kg - 120kg. If you then have an outlier at say 200kg, ask yourself, is this possible? Yes, it could be a very heavy person. However, if you...
44,654
Extreme values in the data
Typically, it is better to remove these values, called outliers. But I would warn you not to use OLS regression in order to detect such outliers: you will probably construct the wrong model, and the detected outliers will probably be wrong. Instead, use a robust linear regression model and calculate standardized residuals for...
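A sketch of this idea in Python, using a hand-rolled Huber-type IRLS fit rather than any particular package (the data, the tuning constant 1.345, the |residual| > 3 cutoff, and the iteration count are illustrative assumptions):

```python
import numpy as np

# Illustrative data: a clean linear relationship plus one gross outlier.
rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 * x + 1.0 + rng.normal(0, 1, n)
y[0] += 50.0                       # plant the outlier

X = np.column_stack([np.ones(n), x])

# Huber-type IRLS: repeatedly refit weighted least squares, down-weighting
# points with large residuals (1.345 is the usual Huber tuning constant).
beta = np.linalg.lstsq(X, y, rcond=None)[0]
for _ in range(50):
    r = y - X @ beta
    s = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust scale (MAD)
    u = np.abs(r) / s
    w = np.minimum(1.0, 1.345 / np.maximum(u, 1e-12))  # Huber weights
    Xw = X * w[:, None]
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)         # X' W X beta = X' W y

std_resid = (y - X @ beta) / s
outliers = np.where(np.abs(std_resid) > 3)[0]          # the usual |r| > 3 rule
print(outliers)
```

The planted point is flagged while the robust slope stays close to the true value; an OLS fit on the same data would have pulled the line toward the outlier and inflated the residual scale.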
44,655
Expected value of sum of cards
By your definition, you have $16$ cards ($10$, $\text{J}$, $\text{K}$, $\text{Q}$) that are worth $10$ points, so with probability $16/52$ you get $10$ points in a single draw. Since $9+\text{anything}\ge 10$, if we take into consideration that there are $4$ nines, then we instantly know that with probability greater ...
44,656
Expected value of sum of cards
The question asks for the "mean of the sum of points." Because the target is 10 and no value exceeds 10, this is the mean of a distribution defined on the ten integers $10, 11, \ldots, 10+10-1$. It takes about the same amount of computing power to work out this distribution, exactly, as it does to perform a small simu...
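A quick check by simulation, assuming the rules implied by the answers (ace worth 1, cards 2-9 at face value, tens and face cards worth 10, drawing without replacement until the running total reaches at least 10):

```python
import random

# Assumed deck: four copies each of values 1..9, sixteen cards worth 10.
DECK = [v for v in range(1, 10) for _ in range(4)] + [10] * 16

def one_game(rng):
    deck = DECK[:]
    rng.shuffle(deck)
    total = 0
    for card in deck:
        total += card
        if total >= 10:       # stop as soon as the target is reached
            return total

rng = random.Random(42)
trials = 100_000
avg = sum(one_game(rng) for _ in range(trials)) / trials
print(round(avg, 2))  # close to the 12.75 reported in the C# answer below
```

The final sums indeed land on the integers 10 through 19, and the simulated mean agrees with the exact with-replacement renewal calculation to within simulation error.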
44,657
Expected value of sum of cards
I ran the simulation in C# .NET and I am getting 12.75x consistently
private static decimal AvgSumToTen()
{
    Int32 loops = Int32.MaxValue / 10;
    //loops = 100;
    Random rand = new Random();
    int thisSum;
    ulong ttl = 0;
    int b;
    int bTenRaw;
    int bTen;
    HashSet<int> values = new HashSet<int>();
    ...
44,658
Matrix inverse not able to be calculated while determinant is non-zero
My guess is that the numbers are too big (the determinant is large) and you're running into a computational problem. I was able to replicate your error by running:
> X <- cbind(1, exp(rexp(100, rate=1/50)))
> det(t(X) %*% X)
[1] 5.156683e+126
> solve(t(X) %*% X)
Error in solve.default...
The problem is numerical. Yo...
44,659
Matrix inverse not able to be calculated while determinant is non-zero
The method for the determinant is different from the method for inverting a matrix. The determinant uses a lower-upper (LU) decomposition. The determinant of a product is the product of determinants. The L is approximately very small and the U is approximately very large. At 16-digit precision the very small number is ro...
44,660
Matrix inverse not able to be calculated while determinant is non-zero
It looks like there's a similar question here, and I'd suggest a similar exploration. What is the condition number of your matrix? Your matrix may be nearly singular, although I suspect that's unlikely. What about the scale of $X$? What are its max values? Your determinant may be overflowing due to scaling issues, in w...
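To illustrate why the condition number, not the determinant, is the right diagnostic, here is a deterministic NumPy sketch loosely mimicking the heavy-tailed design above (the specific values are illustrative assumptions, not the asker's data):

```python
import numpy as np

# One column spans ~50 orders of magnitude, so X'X is terribly conditioned
# even though its determinant is enormous.
x = np.exp(np.linspace(0, 120, 100))
X = np.column_stack([np.ones(100), x])
G = X.T @ X

det_G = np.linalg.det(G)
cond_raw = np.linalg.cond(G)
print(det_G, cond_raw)            # huge determinant AND huge condition number

# Rescaling the column repairs the conditioning; the determinant was never
# the right way to judge near-singularity.
Xs = np.column_stack([np.ones(100), x / x.max()])
cond_scaled = np.linalg.cond(Xs.T @ Xs)
print(cond_scaled)
```

The determinant is finite and astronomically large, yet the condition number shows the inverse cannot be trusted; after rescaling the column, the condition number drops to a perfectly manageable size.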
44,661
Matrix inverse not able to be calculated while determinant is non-zero
Ok, I think det is the one that's misleading here. The "true" determinant is zero if the product of the eigenvalues of $X^TX$ is zero, which happens iff one of the individual eigenvalues is zero. Given computer arithmetic, the determinant will be computed as zero if one of the individual computed eigenvalues is exactly...
44,662
Use of KL Divergence in practice
The Kullback-Leibler divergence is widely used in variational inference, where an optimization problem is constructed that aims at minimizing the KL-divergence between the intractable target distribution P and a sought element Q from a class of tractable distributions. The "direction" of the KL divergence then must be ...
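A small numeric illustration of that asymmetry, with a made-up bimodal target P and a unimodal-ish approximation Q (the specific probabilities are arbitrary):

```python
import numpy as np

def kl(p, q):
    # KL(p || q) = sum_i p_i * log(p_i / q_i), skipping terms with p_i = 0
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

P = np.array([0.45, 0.05, 0.05, 0.45])   # bimodal "target"
Q = np.array([0.10, 0.40, 0.40, 0.10])   # unimodal-ish "approximation"

print(kl(Q, P))   # reverse KL: the direction variational inference minimizes
print(kl(P, Q))   # forward KL: a different number
```

The two directions give different values, which is exactly why the choice of direction matters when setting up the variational objective.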
44,663
Use of KL Divergence in practice
KL is widely used in machine learning. The two main uses I know of: Compression: compressing a document is actually all about finding a good generative model for it. Given that the true model has probability distribution $p(x)$ while you use the approximate $q(x)$, you will have to use excess bits to encode a sequence of ...
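The excess-bits interpretation can be checked numerically; the distributions below are illustrative assumptions (p is dyadic so the entropy works out exactly):

```python
import math

p = [0.5, 0.25, 0.125, 0.125]   # true source distribution
q = [0.25, 0.25, 0.25, 0.25]    # model actually used for coding

entropy = -sum(pi * math.log2(pi) for pi in p)                    # 1.75 bits
cross_entropy = -sum(pi * math.log2(qi) for pi, qi in zip(p, q))  # 2.00 bits
kl_bits = cross_entropy - entropy                                 # 0.25 excess bits/symbol
print(entropy, cross_entropy, kl_bits)
```

Coding symbols from p with a code optimized for q costs the cross-entropy per symbol; the gap between that and the entropy of p is exactly KL(p||q) in bits, the price of using the wrong model.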
44,664
In multidimensional scaling, how can one determine dimensionality of a solution given a stress value?
In multidimensional scaling, how can one determine dimensionality of a solution given a stress value? Given a stress value alone, it is not possible to determine the dimensionality of the dataset. At best, you can evaluate whether the value is low or high (though this evaluation is also a bit problematic to me). From what I unde...
44,665
In multidimensional scaling, how can one determine dimensionality of a solution given a stress value?
This is old, but one can compute BIC for every dimensionality, and choose the dimensionality with the lowest BIC. BIC is nice in that it accounts for inter-subject variability, model fit (stress), and parametric complexity. See Lee 2001: http://www.socsci.uci.edu/~mdlee/lee_mdsbic.pdf
44,666
The advantages of recurrent neural network(RNN) over feed-forward neural network (MLP)
Theoretically, an MLP can approximate any function to arbitrary precision, so in principle there is no need for an RNN. However, that doesn't mean it is usable in the wild. Assuming we are talking about time series input, the textbook answer would be that you can feed your time series into a feed-forward network by having an input layer ...
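That sliding-window construction can be sketched as follows (the toy series and window length are illustrative):

```python
import numpy as np

def lagged_design(series, window):
    """Turn a 1-D series into (X, y): each row of X holds `window`
    consecutive values, and y is the value that follows them."""
    series = np.asarray(series, float)
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

X, y = lagged_design([1, 2, 3, 4, 5, 6], window=3)
print(X)  # rows: [1 2 3], [2 3 4], [3 4 5]
print(y)  # [4. 5. 6.]
```

Each row of X is then an ordinary fixed-length input for an MLP, which is exactly the limitation: the window length is fixed in advance, whereas an RNN can in principle carry information over arbitrarily long horizons.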
44,667
The advantages of recurrent neural network(RNN) over feed-forward neural network (MLP)
For purposes of discussion I'll assume you are using RNN for the typical use case of time series analysis, where the recurrence operation allows response to depend on a time-evolving state; for example the network can now detect changes over time. This is exactly the added capability you'd want a Recurrent Neural Netwo...
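A minimal sketch of the recurrence itself, in plain NumPy rather than any particular framework (the sizes and random weights are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
Wx = rng.normal(0, 0.1, (n_hidden, n_in))      # input-to-hidden weights
Wh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden-to-hidden: the recurrence

def rnn_states(xs):
    h = np.zeros(n_hidden)        # the state carried across time steps
    states = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)
        states.append(h)
    return np.array(states)

xs = rng.normal(size=(5, n_in))   # a length-5 input sequence
H = rnn_states(xs)
print(H.shape)                    # one hidden state per time step
```

The `Wh @ h` term is what a plain feed-forward network lacks: the state at time t depends on the entire history, which is how the network can respond to changes over time.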
44,668
Simple Log regression model in R
In my opinion, it's a good strategy to transform your data before fitting a linear regression model, as your data show a good log relation:
> #generating the data
> n <- 500
> x <- 1:n
> set.seed(10)
> y <- 1*log(x) - 6 + rnorm(n)
>
> #plot the data
> plot(y ~ x)
>
> #fit log model
> fit <- lm(y ~ log(x))
> #Results of the model ...
44,669
Continuous probability distribution over integers?
By definition your distribution is discrete, because you can obtain all the values by counting. Your confusion may stem from two sources. One is that often people assume that discrete also means finite. This is not true, e.g. the Poisson distribution is defined on the non-negative integers, which is an infinite countab...
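A quick numerical check of the Poisson example (lambda = 3 is an arbitrary choice): the support is countably infinite, yet the probabilities still sum to 1, which is what makes it a discrete distribution.

```python
import math

lam = 3.0  # arbitrary rate
def pmf(k):
    # Poisson pmf: exp(-lam) * lam^k / k!, defined for every k in {0, 1, 2, ...}
    return math.exp(-lam) * lam**k / math.factorial(k)

partial = sum(pmf(k) for k in range(100))
print(partial)  # numerically 1.0: discrete, yet with infinitely many outcomes
```

Truncating the infinite sum at k = 100 already recovers 1 to machine precision, since the tail decays faster than geometrically.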
44,670
How to plot clusters in more than 3 dimensions?
Calculate distances between data points, as appropriate to your problem. Then plot your data points in two dimensions instead of fifteen, preserving distances as far as possible. This is probably the key aspect of your question. Read up on multidimensional scaling (MDS) for this. Finally, color your points according to...
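A self-contained sketch of classical (Torgerson) MDS in NumPy, as one way to get such a 2-D embedding (the random 15-dimensional data stand in for the asker's dataset):

```python
import numpy as np

def classical_mds(X, k=2):
    # Classical (Torgerson) MDS: double-center the squared-distance matrix
    # and use the top-k eigenvectors, scaled by sqrt(eigenvalue), as coordinates.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared Euclidean distances
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
    B = -0.5 * J @ D2 @ J                                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 15))    # 30 points in 15 dimensions
Y = classical_mds(X, k=2)        # 2-D coordinates, ready for a scatter plot
print(Y.shape)
```

The 2-D coordinates in Y preserve the pairwise distances as well as any two dimensions can; you can then scatter-plot them and color the points by cluster membership.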
44,671
How to plot clusters in more than 3 dimensions?
I have successfully used a Self-Organizing Map (SOM) in the past for this task. It is a kind of Neural Network with some relation to Clustering, with significant advantages over them for some specific tasks. The main advantage (to me) is that it is an unsupervised method, meaning that you can apply it even with unknown...
44,672
Random Forest Overfitting R
The claim that, in random forests, overfitting is generally caused by over-growing the trees, as stated in one of the other answers, is completely WRONG. The RF algorithm, by definition, requires fully grown unpruned trees. This is the case because RF can only reduce variance, not bias (where $error=bias+variance$). Since the bias o...
44,673
Random Forest Overfitting R
One reason that your Random Forest may be overfitting is that you have a lot of redundant features or your features are heavily correlated. If a lot of your features are redundant, then when the splits in the nodes of the trees are performed, the algorithm may often only choose very poor features, which makes your mo...
44,674
Random Forest Overfitting R
In random forests, overfitting is generally caused by over-growing the trees. Pruning the trees would also help. So, some parameters which you can optimize in the cForest arguments are ntree and mtry. mtry is the number of variables the algorithm draws to build each tree. ntree is the total number of trees in the forest...
44,675
Variance-covariance matrix of logit with matrix computation
@Deep North: You are right, there should not be an 'n'. The covariance matrix of a logistic regression is different from the covariance matrix of a linear regression.
Linear regression: $\widehat{\text{Var}}(\hat{\beta}) = \hat{\sigma}^2 (X^T X)^{-1}$
Logistic regression: $\widehat{\text{Var}}(\hat{\beta}) = (X^T W X)^{-1}$
where $W$ is a diagonal matrix with $w_{ii} = \hat{\pi}_i(1-\hat{\pi}_i)$, and $\hat{\pi}_i$ is the probability of event=1 at the observation level.
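A sketch verifying the logistic formula on simulated data, fitting the model by Newton-Raphson in plain NumPy (the data-generating values are illustrative assumptions):

```python
import numpy as np

# Hypothetical simulated data; the point is the covariance formula, not the data.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

# Newton-Raphson fit; the Hessian of the log-likelihood is X' W X with
# W = diag(pi_i * (1 - pi_i)).
beta = np.zeros(2)
for _ in range(25):
    pi = 1 / (1 + np.exp(-X @ beta))
    H = X.T @ (X * (pi * (1 - pi))[:, None])   # X' W X
    beta = beta + np.linalg.solve(H, X.T @ (y - pi))

# Estimated covariance matrix of beta-hat: (X' W X)^{-1} at the final fit.
pi = 1 / (1 + np.exp(-X @ beta))
cov = np.linalg.inv(X.T @ (X * (pi * (1 - pi))[:, None]))
se = np.sqrt(np.diag(cov))
print(beta, se)
```

The square roots of the diagonal of `cov` are the standard errors a routine like R's `glm` would report; note there is no $\sigma^2$ factor and no $n_i$ inside $W$, matching the correction in the next answer.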
44,676
Variance-covariance matrix of logit with matrix computation
The covariance for logistic regression from subra is correct. But $w_{ii}=\hat{\pi_i}(1-\hat{\pi_i})$. There should not be an $n_i$. ref. David W. Hosmer, Applied Logistic Regression (2nd Edition), p35 and p41, formula (2.8). I revised your program and compared it with the variance estimation; they are close but not the same. l...
44,677
Variance-covariance matrix of logit with matrix computation
mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")
mylogit <- glm(admit ~ gre + gpa, data = mydata, family = "binomial")
X <- as.matrix(cbind(1, mydata[, c('gre', 'gpa')]))
beta.hat <- as.matrix(coef(mylogit))
require(slam)
p <- 1/(1+exp(-X %*% beta.hat))
V <- simple_triplet_zero_matrix(dim(X)[1])
diag(...
44,678
Alpha parameter in ridge regression is high
The L2 norm term in ridge regression is weighted by the regularization parameter alpha. If the alpha value is 0, the model is just an Ordinary Least Squares Regression. The larger the alpha, the stronger the smoothness constraint. Conversely, the smaller the value of alpha, the higher would be the magn...
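The shrinkage effect is easy to see from the closed-form ridge solution; the simulated data below are an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
y = X @ np.array([3.0, -2.0, 1.0, 0.0, 0.5]) + rng.normal(size=n)

def ridge(X, y, alpha):
    # Closed-form ridge solution: beta = (X'X + alpha * I)^{-1} X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

norms = [np.linalg.norm(ridge(X, y, a)) for a in (0.0, 1.0, 100.0, 1e5)]
print(norms)   # alpha = 0 is plain OLS; the coefficient norm shrinks as alpha grows
```

With alpha = 0 the solution coincides with OLS, and the coefficient norm decreases monotonically as alpha grows, which is why a large selected alpha simply means the cross-validation preferred heavy shrinkage.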
44,679
Correlation coefficient is very small
A large amount of data can only help you to determine the correlation more precisely; it cannot reduce the correlation. The problem with your data seems rather to be that, yes, you have a slight positive relationship between your variables for a large number of useful votes, described by your fitted linear equation, bu...
44,680
Correlation coefficient is very small
related ≠ correlated The garden-variety Pearson correlation $r$ measures the strength of linear association between two variables $x$ and $y$. The easiest way to think of it (in my opinion) is in terms of fitting a linear model of $y$ against $x$. If the model is a perfect fit (i.e. $y$ plotted against $x$ is a straigh...
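A deterministic illustration of "related ≠ correlated": below, y is a perfect (but nonlinear) function of x, so Pearson's r on the raw scale understates the association, while r after the matching transform is exactly 1 (the grid 1..1000 is an arbitrary choice):

```python
import numpy as np

x = np.arange(1.0, 1001.0)
y = np.log(x)                        # y is a perfect function of x, just not linear

r_raw = np.corrcoef(x, y)[0, 1]      # clearly below 1 despite the perfect relation
r_log = np.corrcoef(np.log(x), y)[0, 1]
print(r_raw, r_log)                  # the transform restores r = 1
```

The linear model of y against raw x is a poor fit for a concave relationship, so r is well below 1 even though nothing random is going on at all.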
44,681
Correlation coefficient is very small
It is probably because the relationship you are seeing is not linear, and the usual correlation coefficient reflects a linear relationship. As @A._Donda said, transform useful votes and you will see a different picture.
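A quick sketch of that suggestion with simulated data (the variable names are hypothetical, not the asker's actual data): a strong but exponential relationship gives a modest Pearson r that jumps after a log transform.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 5, size=1000)                            # hypothetical predictor
votes = np.exp(1.5 * x + rng.normal(scale=0.2, size=1000))  # strongly but nonlinearly related

def pearson(a, b):
    """Pearson correlation computed from centred dot products."""
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

r_raw = pearson(x, votes)          # modest, despite the strong relationship
r_log = pearson(x, np.log(votes))  # the log transform linearises it
```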
44,682
How to find conditional distributions from joint
Those distributions you call "marginal" are not marginal. They are conditional distributions because you wrote $x \mid y$. The marginal distribution of $X$, for example, is necessarily independent of the value of $Y$. To see how the conditional distribution is gamma, all you have to do is write $$f_{X \mid Y}(x) = \f...
44,683
How to find conditional distributions from joint
The "trick" is to observe that $f(x\mid y)=f(x,y)/f(y)$ is proportional to $f(x,y)$ up to terms that do not involve $x$. Hence, $f(x\mid y)\propto x^2\exp(-(y^2+4)x)$, and this is the "kernel" of a $\mathrm{Gamma}(3,y^2+4)$ density. The other full conditional $f(y\mid x)$ is obtained similarly after completing the squa...
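The two full conditionals make this joint a natural Gibbs-sampling exercise. A minimal sketch in Python, assuming (the answer truncates before stating it) that completing the square yields $y \mid x \sim \mathrm{N}(0, 1/(2x))$, since the joint kernel in $y$ is $\exp(-x y^2)$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Gibbs sampler for f(x, y) ∝ x^2 exp(-(y^2 + 4) x), x > 0, using the
# full conditionals read off from the kernels:
#   x | y ~ Gamma(shape=3, rate=y^2 + 4)
#   y | x ~ Normal(0, variance 1/(2x))   (assumed completion of the square)
n = 5000
x, y = 1.0, 0.0
xs = np.empty(n)
for i in range(n):
    x = rng.gamma(shape=3, scale=1.0 / (y**2 + 4))  # numpy uses scale = 1/rate
    y = rng.normal(0.0, np.sqrt(1.0 / (2 * x)))
    xs[i] = x
```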
44,684
How can using Logistic Regression without regularization be better?
As far as I know the idea of regularization is to have the weights as small as possible and so using lambda will penalize large weights. Deep down, regularization is really about preventing your weights from fitting the "noise" in your problem, aka overfitting. If you have more noise (i.e. as measured by the standard...
44,685
Can we express logistic loss minimization as a maximum likelihood problem?
It is equivalent to the maximum likelihood approach. The different appearance results from the different coding for $y_i$ (which is arbitrary). Keeping in mind that $y_i \in \{-1,1\}$, and denoting $$\Lambda(z) = [1+\exp(-z)]^{-1}$$ we have that $$\min_w \sum_{i=1}^N \log[1+\exp(-y_iw^Tx_i)] = \max_w \sum_{i=1}^N \log...
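The equivalence is easy to check numerically. A small sketch with a fixed (not fitted) weight vector, evaluating the logistic loss under the $\{-1,+1\}$ coding against the negative log-likelihood under the $\{0,1\}$ coding:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 2))
w = np.array([0.7, -1.2])
y_pm = rng.choice([-1, 1], size=50)   # labels coded as -1/+1
y01 = (y_pm + 1) // 2                 # the same labels recoded as 0/1

z = X @ w
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Logistic loss with the {-1, +1} coding
loss = np.sum(np.log1p(np.exp(-y_pm * z)))
# Negative log-likelihood with the {0, 1} coding
nll = -np.sum(y01 * np.log(sigmoid(z)) + (1 - y01) * np.log(1 - sigmoid(z)))
```

The two quantities agree term by term: for $y=+1$, $\log(1+e^{-z}) = -\log\Lambda(z)$, and for $y=-1$, $\log(1+e^{z}) = -\log(1-\Lambda(z))$.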
44,686
Probability of the limit of a sequence of events
A sufficient condition is that the events are nested $A_1 \subset A_2 \subset \ldots$ or $A_1 \supset A_2 \supset \ldots$.
44,687
Probability of the limit of a sequence of events
This is a basic property of probability measures. One item of the definition for a probability measure says that if $B_n$ are disjoint events, then $$ P \left(\bigcup_{n \geq 1} B_n \right) = \sum_{n \geq 1}P(B_n).$$ In the first case, you can define $B_n = A_n-A_{n-1}$, which gives the result immediately. Because $P(\...
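Written out, the increasing case (with the convention $A_0 = \emptyset$, so the $B_n = A_n \setminus A_{n-1}$ are disjoint) chains countable additivity with finite additivity:

```latex
P\Big(\bigcup_{n\ge 1} A_n\Big)
  = P\Big(\bigcup_{n\ge 1} B_n\Big)
  = \sum_{n\ge 1} P(B_n)
  = \lim_{N\to\infty} \sum_{n=1}^{N} P(B_n)
  = \lim_{N\to\infty} P(A_N),
```

since by finite additivity $\sum_{n=1}^{N} P(B_n) = P(A_N)$ for every $N$.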
44,688
Least squares: Calculus to find residual minimizers?
The principle underlying least squares regression is that the sum of the squares of the errors is minimized. We can use calculus to find equations for the parameters $\beta_0$ and $\beta_1$ that minimize the sum of the squared errors, $S$. $$S = \displaystyle\sum\limits_{i=1}^n \left(e_i \right)^2= \sum \left(y_i - \h...
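The closed-form solutions of the two normal equations can be checked numerically. A sketch with synthetic data: the calculus-derived coefficients match `np.polyfit`, and perturbing them in any direction increases $S$.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, size=30)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=30)

# Solutions of dS/dbeta0 = 0 and dS/dbeta1 = 0 (the normal equations)
beta1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta0 = y.mean() - beta1 * x.mean()

def sse(b0, b1):
    """Sum of squared errors S for candidate coefficients."""
    return np.sum((y - b0 - b1 * x) ** 2)
```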
44,689
Least squares: Calculus to find residual minimizers?
A simpler presentation of the calculus can be done in the context of the broader multiple linear regression model, but this requires knowledge of multivariate calculus (i.e., vector calculus). In this broader setting, we have the regression model: $$\boldsymbol{Y} = \boldsymbol{x} \boldsymbol{\beta} + \boldsymbol{\var...
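A sketch of where the vector calculus leads, assuming the design matrix $\boldsymbol{x}$ has full column rank (so that $\boldsymbol{x}^\top\boldsymbol{x}$ is invertible):

```latex
S(\boldsymbol{\beta})
  = (\boldsymbol{Y} - \boldsymbol{x}\boldsymbol{\beta})^\top
    (\boldsymbol{Y} - \boldsymbol{x}\boldsymbol{\beta}),
\qquad
\frac{\partial S}{\partial \boldsymbol{\beta}}
  = -2\,\boldsymbol{x}^\top(\boldsymbol{Y} - \boldsymbol{x}\boldsymbol{\beta})
  = \boldsymbol{0}
\;\Longrightarrow\;
\hat{\boldsymbol{\beta}}
  = (\boldsymbol{x}^\top\boldsymbol{x})^{-1}\boldsymbol{x}^\top\boldsymbol{Y}.
```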
44,690
Difference between Meta-Analysis, Meta-Regression and Moderator-Analysis
Here are some suggestions for definitions that may help to clarify the terminology: Meta-analysis: A general term to denote the collection of statistical methods and techniques used to aggregate/synthesize and compare the results from several related studies in a systematic manner. Moderator analysis: In the context o...
44,691
How to calculate Estimated Arithmetic Mean for a lognormal distribution
For positive data $x_1, x_2, \ldots, x_n$ let $y_i = \log(x_i)$ be their natural logarithms. Set $$\bar{y} = \frac{1}{n}(y_1+y_2+\cdots + y_n)$$ and $$s^2 = \frac{1}{n-1}\left((y_1 - \bar{y})^2 + \cdots + (y_n - \bar{y})^2\right);$$ these are the mean log and variance of the logs, respectively. The UMVUE for the arit...
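For a quick numerical sanity check, here is the simple plug-in estimator $\exp(\bar{y} + s^2/2)$ — not the UMVUE this answer derives, just the naive version — compared against the true and sample means on simulated lognormal data:

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma = 1.0, 0.5
x = rng.lognormal(mean=mu, sigma=sigma, size=10000)

# Mean and (unbiased) variance of the logs
ybar = np.log(x).mean()
s2 = np.log(x).var(ddof=1)

# Naive plug-in estimate of E[X] = exp(mu + sigma^2/2); the UMVUE
# replaces exp(s^2/2) with a more refined correction factor
plug_in = np.exp(ybar + s2 / 2)
true_mean = np.exp(mu + sigma**2 / 2)
```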
44,692
How to calculate Estimated Arithmetic Mean for a lognormal distribution
@whuber already gave a complete answer. For convenience, I want to share an implementation of whuber's algorithm in R along with two other solutions using pre-existing packages. Using whuber's algorithm #----------------------------------------------------------------------------- # The data #--------------------------...
44,693
Interpretation of Little's MCAR test
A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis; in this case the null hypothesis is that the data is MCAR, i.e. no patterns exist in the missing data. Proving the existence of MAR data is difficult, but you can check whether the missing values are related to other variables. The ...
44,694
Interpretation of Little's MCAR test
tl;dr: Little's test is probably not well-powered enough to detect missingness. You're probably testing for the wrong kind of missingness and won't be able to learn about the kind of missingness you really care about. The things you would do to handle data that are MAR or covariate-dependent-MCAR are things you should ...
44,695
Interpretation of Little's MCAR test
As far as I know, you can look at either the right or the left tail of a chi-squared test. With a p-value of exactly 1, it is possible to say (with some caution) that your data could be artificially generated and "too random". So it could be an issue with your p-value. (here is the answer I am referring to) Another issue - chi squa...
44,696
$p(D)$ in Bayesian Statistics
$P(D)$ is not a prior. It is what is called model evidence or marginal likelihood. $P(\theta)$ is the prior over the parameters of interest and $P(D)$ is $\int_{\theta} P(\theta) P(D|\theta) d\theta$. This is basically the normalisation that you need to apply to ensure that the posterior is a valid distribution. So bas...
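For a one-parameter problem, the integral defining $P(D)$ can be approximated on a grid. A toy sketch (coin flips with a uniform prior; the data here are hypothetical, 7 heads in 10 flips):

```python
import numpy as np

# theta ~ Uniform(0, 1) prior, data D = 7 heads in 10 flips
theta = np.linspace(1e-6, 1 - 1e-6, 100001)
dtheta = theta[1] - theta[0]
prior = np.ones_like(theta)                 # Uniform(0, 1) density
likelihood = theta**7 * (1 - theta)**3      # P(D | theta), up to the binomial coefficient

# P(D) = ∫ P(theta) P(D | theta) dtheta, approximated by a Riemann sum
p_d = np.sum(prior * likelihood) * dtheta

posterior = prior * likelihood / p_d        # now integrates to 1
```

Here $P(D)$ matches the exact Beta integral $B(8,4) = 1/1320$.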
44,697
$p(D)$ in Bayesian Statistics
$P(D)$ is necessary if you characterise the full posterior. For example, if you want to do a maximum-a-posteriori (MAP) estimate of your parameters, then you do not need to worry about the normaliser as you are only trying to maximise the posterior probability of the parameters given the observation i.e. $$ P(\theta|D)...
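A quick numerical illustration that the argmax is unaffected by the normaliser (a toy coin-flip posterior with hypothetical data, 7 heads in 10 flips under a uniform prior, whose MAP is 0.7):

```python
import numpy as np

theta = np.linspace(1e-6, 1 - 1e-6, 10001)
unnorm = theta**7 * (1 - theta)**3                         # prior x likelihood, unnormalised
norm = unnorm / (np.sum(unnorm) * (theta[1] - theta[0]))   # divided by the P(D) estimate

# Dividing by a positive constant cannot move the maximiser
map_unnorm = theta[np.argmax(unnorm)]
map_norm = theta[np.argmax(norm)]
```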
44,698
Using the gap statistic to compare algorithms
Logically, the answer should be yes: you may compare, by the same criterion, solutions that differ in the number of clusters and/or the clustering algorithm used. The majority of the many internal clustering criteria (one of them being the Gap statistic) are not tied (in a proprietary sense) to a specific clustering method: they...
44,699
Using the gap statistic to compare algorithms
Note that some algorithms will try to optimize the gap/silhouette/ssq, others won't. By comparing different algorithms with a measure that correlates with some of the objective functions, you will be more likely measuring how similar the algorithm is to the gap statistic, but not how good it actually works. A similar p...
44,700
Linear Combination of multivariate t distribution
I am trying to see if the linear combination of multivariate t distribution will give a multivariate t distribution. In general, no, this is not the case, even with univariate t's (see here and here for example; note that the difference of two t-random variables is the sum of two t-random variables, but with the secon...
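The contrast is easiest to see from the construction: a multivariate t divides one Gaussian vector by a single shared $\chi^2$ mixing variable, so any linear combination is again a (scaled) univariate t; with independent t components there is no shared denominator and the result is generally not t, as noted above. A Monte Carlo sketch of the shared-denominator case, checking the implied variance $\frac{\nu}{\nu-2}\,a^\top\Sigma a$:

```python
import numpy as np

rng = np.random.default_rng(6)
nu, n = 8, 200000
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
a = np.array([2.0, -1.0])

# Multivariate t construction: one shared chi-square mixing variable per draw
Z = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
W = rng.chisquare(nu, size=n)
X = Z / np.sqrt(W / nu)[:, None]

# a'X = (a'Z) / sqrt(W/nu): a univariate t_nu scaled by sqrt(a' Sigma a)
comb = X @ a
target_var = (a @ Sigma @ a) * nu / (nu - 2)   # variance of that scaled t_nu
```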