Dataset schema (column → type, min–max value or string length):

    idx                int64           1 – 56k
    question           stringlengths   15 – 155
    answer             stringlengths   2 – 29.2k
    question_cut       stringlengths   15 – 100
    answer_cut         stringlengths   2 – 200
    conversation       stringlengths   47 – 29.3k
    conversation_cut   stringlengths   47 – 301
12,101
The limit of "unit-variance" ridge regression estimator when $\lambda\to\infty$
This is an algebraic counterpart to @Martijn's beautiful geometric answer. First of all, the limit of $$\hat{\boldsymbol\beta}_\lambda^* = \arg\min\Big\{\|\mathbf y - \mathbf X \boldsymbol \beta\|^2+\lambda\|\boldsymbol\beta\|^2\Big\} \:\:\text{s.t.}\:\: \|\mathbf X \boldsymbol\beta\|^2=1$$ when $\lambda\to\infty$ is v...
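Since the answer is cut off, the limit can be probed numerically. The sketch below is my own illustration on synthetic data: with $p=2$ the SVD turns the constraint $\|\mathbf X\boldsymbol\beta\|=1$ into a circle, so the constrained problem can be solved by brute-force grid search, and the large-$\lambda$ solution can be compared against the first principal direction scaled to satisfy the constraint (the scaling convention here is an assumption, not necessarily the author's).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 2
X = rng.standard_normal((n, p)) @ np.array([[3.0, 0.0], [0.0, 1.0]])
y = X @ np.array([1.0, 1.0]) + rng.standard_normal(n)

# With X = U S V^T and b = V S^{-1} w, the constraint ||Xb|| = 1 becomes
# ||w|| = 1, so feasible points are w = (cos t, sin t) on a circle.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

def constrained_ridge(lam, grid=200_000):
    t = np.linspace(0, 2 * np.pi, grid)
    W = np.stack([np.cos(t), np.sin(t)])      # all unit w on a fine grid
    B = Vt.T @ (W / s[:, None])               # corresponding beta vectors
    # On the feasible set, ||y||^2 and ||Xb||^2 = 1 are constant, so it
    # suffices to minimize -2 y'Xb + lam ||b||^2.
    scores = -2 * (y @ X) @ B + lam * np.sum(B**2, axis=0)
    return B[:, np.argmin(scores)]

# candidate limit: top right singular vector, scaled so ||Xb|| = 1
b_limit = Vt[0] / s[0]

b = constrained_ridge(1e7)
cos = abs(b @ b_limit) / (np.linalg.norm(b) * np.linalg.norm(b_limit))
print(cos)  # approaches 1 as lambda grows
```

The intuition: on the constraint set the fit term is bounded, so for huge $\lambda$ the problem reduces to minimizing $\|\boldsymbol\beta\|^2$ subject to $\|\mathbf X\boldsymbol\beta\|=1$, whose solution is the top right singular vector of $\mathbf X$ divided by its singular value.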
12,102
How does minibatch gradient descent update the weights for each example in a batch?
Gradient descent doesn't quite work the way you suggested but a similar problem can occur. We don't calculate the average loss from the batch, we calculate the average gradients of the loss function. The gradients are the derivative of the loss with respect to the weight and in a neural network the gradient for one wei...
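The distinction the answer draws (average of per-example gradients, not gradient of some other quantity) can be checked directly. A minimal sketch with a linear model and squared loss on made-up data; by linearity of differentiation the two computations coincide:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 3))   # a minibatch of 8 examples
y = rng.standard_normal(8)
w = rng.standard_normal(3)

def grad_single(xi, yi, w):
    # gradient of 0.5 * (xi @ w - yi)^2 with respect to w
    return (xi @ w - yi) * xi

# average of the per-example gradients
per_example = np.mean([grad_single(X[i], y[i], w) for i in range(8)], axis=0)

# gradient of the mean loss over the batch, computed in one shot
batch_grad = X.T @ (X @ w - y) / 8

print(np.allclose(per_example, batch_grad))  # True: they coincide by linearity
```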
12,103
How does minibatch gradient descent update the weights for each example in a batch?
The reason to use mini batches is to have a good amount of training examples so that the possible noise is reduced by averaging their effects, while also not using a full batch, which for many datasets could require a huge quantity of memory. One important fact is that the error that you evaluate is always a distanc...
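The noise-averaging claim can be made concrete: the minibatch gradient is an unbiased but noisy estimate of the full-batch gradient, and its error shrinks roughly like $1/\sqrt{\text{batch size}}$. A small sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000
X = rng.standard_normal((N, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(N)
w = np.zeros(3)

full_grad = X.T @ (X @ w - y) / N   # full-batch gradient, treated as ground truth

def minibatch_grad(batch_size):
    idx = rng.choice(N, size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch_size

mean_err = {}
for b in [4, 64, 1024]:
    errs = [np.linalg.norm(minibatch_grad(b) - full_grad) for _ in range(200)]
    mean_err[b] = float(np.mean(errs))
print(mean_err)  # error shrinks roughly like 1/sqrt(batch size)
```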
12,104
Group differences on a five point Likert item
Clason & Dormody discussed the issue of statistical testing for Likert items (Analyzing data measured by individual Likert-type items). I think that a bootstrapped test is ok when the two distributions look similar (bell shaped and equal variance). However, a test for categorical data (e.g. trend or Fisher test, or ord...
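One simple variant of the bootstrap approach the answer mentions, sketched on hypothetical 5-point Likert data (the group sizes, probabilities, and percentile-CI construction here are my illustrative choices, not prescribed by the answer):

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical 5-point Likert responses for two groups
g1 = rng.choice([1, 2, 3, 4, 5], size=80, p=[0.1, 0.2, 0.3, 0.25, 0.15])
g2 = rng.choice([1, 2, 3, 4, 5], size=80, p=[0.05, 0.1, 0.25, 0.3, 0.3])

obs = g1.mean() - g2.mean()

# resample each group with replacement and recompute the difference
boot = np.array([
    rng.choice(g1, size=80).mean() - rng.choice(g2, size=80).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"observed diff {obs:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```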
12,105
Group differences on a five point Likert item
Depending on the size of the dataset in question, a permutation test might be preferable to a bootstrap in that it may be able to provide an exact test of the hypothesis (and an exact CI).
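A permutation test of the kind the answer recommends can be written in a few lines. A minimal sketch with made-up Likert responses, using the difference in group means as the test statistic (the statistic and the Monte Carlo approximation are my illustrative choices; an exact test would enumerate all relabelings):

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical 5-point Likert responses for two groups of 10
g1 = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])
g2 = np.array([2, 3, 3, 4, 4, 4, 5, 5, 5, 5])

obs = g1.mean() - g2.mean()
pooled = np.concatenate([g1, g2])

n_perm = 10_000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)            # relabel group membership
    diff = perm[:10].mean() - perm[10:].mean()
    if abs(diff) >= abs(obs):
        count += 1
pval = count / n_perm
print(f"observed diff {obs:.2f}, two-sided permutation p ~ {pval:.3f}")
```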
12,106
Group differences on a five point Likert item
IMHO you cannot use a t-test for Likert scales. The Likert scale is ordinal and "knows" only about relations of values of a variable: e.g. "totally dissatisfied" is worse than "somehow dissatisfied". A t-test on the other hand needs to calculate means and more and thus needs interval data. You can map Likert scale scor...
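A standard rank-based alternative to the t-test for ordinal data, sketched here with hypothetical Likert responses (the data are made up; the Mann-Whitney U test is one common choice, not the only one):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# hypothetical ordinal (Likert) responses for two groups
g1 = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])
g2 = np.array([2, 3, 3, 4, 4, 4, 5, 5, 5, 5])

# Mann-Whitney U uses only the ordering of the values, not their spacing,
# so it needs no interval-scale assumption
stat, p = mannwhitneyu(g1, g2, alternative="two-sided")
print(stat, p)
```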
12,107
Group differences on a five point Likert item
If each single item in the questionnaire is ordinal, and I don't think that this point can be disputed given how there is no way of knowing whether the quantitative difference between "strongly agree" and "agree" is the same as that between "strongly disagree" and "disagree", then why would the summation of all these o...
12,108
Group differences on a five point Likert item
A proportional odds model is better than a t-test for a Likert item scale.
12,109
Group differences on a five point Likert item
I will try to explain the proportional odds model in this context, since it was suggested and indicated in at least 2 answers to this question. The score test of a proportional odds model is equivalent to the Wilcoxon rank sum test. More precisely, the score test statistic for no effect of a single dichotomous covaria...
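The Wilcoxon rank-sum statistic that the answer connects to the proportional-odds score test can be computed by hand. A sketch on hypothetical Likert data (midranks handle ties; the tie correction to the null variance is omitted here for brevity, which slightly overstates the variance):

```python
import numpy as np
from scipy.stats import rankdata, norm

g1 = np.array([1, 2, 2, 3, 3, 4, 4, 5])  # hypothetical Likert responses
g2 = np.array([2, 3, 3, 4, 4, 5, 5, 5])

# Wilcoxon rank-sum statistic: sum of group-1 ranks in the pooled sample
pooled = np.concatenate([g1, g2])
ranks = rankdata(pooled)                 # midranks for tied values
W = ranks[: len(g1)].sum()

n1, n2 = len(g1), len(g2)
mu = n1 * (n1 + n2 + 1) / 2              # null mean of W
sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # null SD (tie correction omitted)
z = (W - mu) / sigma
p = 2 * norm.sf(abs(z))
print(W, round(z, 3), round(p, 3))
```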
12,110
How to interpret smooth l1 loss?
Smooth L1-loss can be interpreted as a combination of L1-loss and L2-loss. It behaves as L1-loss when the absolute value of the argument is high, and it behaves like L2-loss when the absolute value of the argument is close to zero. The equation is: $L_{1;smooth} = \begin{cases}|x| & \text{if $|x|>\alpha$;} \\ \frac{1}{...
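The piecewise behavior is easy to verify numerically. A minimal sketch using the PyTorch-style parameterization (an assumption, since conventions for the constant and the scaling by $\alpha$ differ between sources):

```python
import numpy as np

def smooth_l1(x, alpha=1.0):
    # quadratic for |x| <= alpha, linear (L1-like) beyond
    # (PyTorch-style convention; other parameterizations exist)
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= alpha,
                    0.5 * x**2 / alpha,
                    np.abs(x) - 0.5 * alpha)

print(smooth_l1(0.0))   # 0.0: quadratic (L2-like) near zero
print(smooth_l1(10.0))  # 9.5: |x| - alpha/2, i.e. L1-like far from zero
print(smooth_l1(1.0))   # 0.5: the two pieces meet continuously at |x| = alpha
```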
12,111
Difference between scikit-learn implementations of PCA and TruncatedSVD
PCA and TruncatedSVD scikit-learn implementations seem to be exactly the same algorithm. No: PCA is (truncated) SVD on centered data (by per-feature mean subtraction). If the data is already centered, those two classes will do the same. In practice TruncatedSVD is useful on large sparse datasets which cannot be cente...
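The centering point can be demonstrated with plain numpy, without scikit-learn (an illustrative sketch on synthetic data; the SVD of the raw matrix stands in for TruncatedSVD-style computation):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((50, 4)) + 10.0   # data with a large mean offset

# PCA: SVD of the column-centered matrix
Xc = X - X.mean(axis=0)
_, _, Vt_pca = np.linalg.svd(Xc, full_matrices=False)

# SVD of the raw, uncentered matrix (what TruncatedSVD-style code sees)
_, _, Vt_raw = np.linalg.svd(X, full_matrices=False)

# PCA components match the covariance eigenvectors, up to sign
evals, evecs = np.linalg.eigh(Xc.T @ Xc / (len(X) - 1))
same_as_eig = np.allclose(np.abs(evecs[:, -1]), np.abs(Vt_pca[0]))
same_as_raw = np.allclose(np.abs(Vt_pca[0]), np.abs(Vt_raw[0]))
print(same_as_eig, same_as_raw)  # True, False: centering changes the answer
```

On uncentered data the leading singular direction is dominated by the mean offset, which is exactly why the two classes only agree after centering.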
12,112
Difference between scikit-learn implementations of PCA and TruncatedSVD
There is also a difference in how attribute explained_variance_ is calculated. Let the data matrix $\mathbf X$ be of $n \times p$ size, where $n$ is the number of samples and $p$ is the number of variables. And $\mathbf{X}_c$ is the centered data matrix, i.e. column means have been subtracted and are now equal to zero ...
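The explained-variance discrepancy can also be reproduced with plain numpy. The two candidate computations below are my labels for the two natural formulas (squared singular values vs. variance of the projections); they coincide exactly when the data are centered and diverge otherwise:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((100, 3)) + 5.0   # deliberately uncentered
n = len(X)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt.T                         # projections onto singular directions

from_singular_values = s**2 / (n - 1)            # formula based on raw singular values
from_score_variance = scores.var(axis=0, ddof=1) # variance of the projections
agree_uncentered = np.allclose(from_singular_values, from_score_variance)

# after centering, the projections have zero mean and the two coincide
Xc = X - X.mean(axis=0)
Uc, sc, Vtc = np.linalg.svd(Xc, full_matrices=False)
scores_c = Xc @ Vtc.T
agree_centered = np.allclose(sc**2 / (n - 1), scores_c.var(axis=0, ddof=1))
print(agree_uncentered, agree_centered)  # False, True
```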
12,113
Explanation of what Nate Silver said about loess
The problem with lowess or loess is that it uses a polynomial interpolation. It is well known in forecasting that polynomials have erratic behavior in the tails. When interpolating, piecewise 3rd degree polynomials provide excellent and flexible modeling of trends whereas extrapolating beyond the range of observed data...
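The erratic tail behavior of polynomials is easy to see numerically. A sketch (my own toy construction, a global polynomial fit rather than loess itself): fit a high-degree polynomial to noisy data on an interval, then evaluate it inside and outside that interval.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-1, 1, 50)
y = np.sin(3 * x) + 0.01 * rng.standard_normal(50)

coeffs = np.polyfit(x, y, deg=9)    # a global high-degree polynomial fit
inside = np.polyval(coeffs, 0.0)    # interpolation: stays near the data
outside = np.polyval(coeffs, 2.0)   # extrapolation: the tail runs away

print(abs(inside), abs(outside))    # the extrapolated value is far larger
```

The data never exceed about 1 in magnitude, yet the extrapolated value blows up well beyond that, which is the behavior the answer warns about when extending a fit past the observed range.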
12,114
Next steps after "Bayesian Reasoning and Machine Learning"
I'd not heard of the Barber book before, but having had a quick look through it, it does look very very good. Unless you've got a particular field you want to look into I'd suggest the following (some/many of which you've probably already heard of): Information theory, inference and learning algorithms, by D.J.C Macka...
12,115
Next steps after "Bayesian Reasoning and Machine Learning"
I recently found a more computational perspective on Bayesian reasoning and statistics: "Probabilistic Programming and Bayesian Methods for Hackers". This is probably just as good an introduction to Bayesian methods as Barber.
12,116
What is empirical entropy?
If the data is $x^n = x_1 \ldots x_n$, that is, an $n$-sequence from a sample space $\mathcal{X}$, the empirical point probabilities are $$\hat{p}(x) = \frac{1}{n}|\{ i \mid x_i = x\}| = \frac{1}{n} \sum_{i=1}^n \delta_x(x_i)$$ for $x \in \mathcal{X}$. Here $\delta_x(x_i)$ is one if $x_i = x$ and zero otherwise. That ...
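The plug-in computation defined above is a few lines of code. A minimal sketch (the function name and the base-2 convention are my choices):

```python
import numpy as np
from collections import Counter

def empirical_entropy(data, base=2):
    # plug-in estimate: entropy of the empirical point probabilities p_hat(x)
    n = len(data)
    counts = Counter(data)
    probs = np.array([c / n for c in counts.values()])
    return -np.sum(probs * np.log(probs)) / np.log(base)

print(empirical_entropy("aabb"))  # 1.0 bit: two equally frequent symbols
print(empirical_entropy("aaaa"))  # 0.0: a constant sequence
print(empirical_entropy("abcd"))  # 2.0 bits: four equally frequent symbols
```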
12,117
What is empirical entropy?
Entropy is defined for probability distributions. When you do not have one, but only data, and plug in a naive estimator of the probability distribution, you get empirical entropy. This is easiest for discrete (multinomial) distributions, as shown in another answer, but can also be done for other distributions by binni...
12,118
On the "strength" of weak learners
This may be more in bagging spirit, but nevertheless: If you really have a strong learner, there is no need to improve it by any ensemble stuff. I would say... irrelevant. In blending and bagging trivially, in boosting making a too strong classifier may lead to some breaches in convergence (i.e. a lucky prediction may...
12,119
On the "strength" of weak learners
First, the notions of "weak" and "strong" are only weakly defined. From my point of view, they must be defined relative to the optimal Bayes classifier, which is the target of any training algorithm. With this in mind, my replies to three of the points are as follows. Computational as I see it. Most weak learners I know ...
12,120
What is the NULL hypothesis for interaction in a two-way ANOVA?
I think it's important to clearly separate the hypothesis and its corresponding test. For the following, I assume a balanced, between-subjects CRF-$pq$ design (equal cell sizes, Kirk's notation: Completely Randomized Factorial design). $Y_{ijk}$ is observation $i$ in treatment $j$ of factor $A$ and treatment $k$ of fac...
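Since the answer is cut off, here is the standard way the interaction null is written in the cell-means parameterization it sets up (a sketch in conventional notation, not necessarily the author's exact formulation):

```latex
H_0\colon\quad \mu_{jk} - \mu_{j\cdot} - \mu_{\cdot k} + \mu_{\cdot\cdot} = 0
\qquad \text{for all } j = 1,\dots,p,\ k = 1,\dots,q
```

where $\mu_{jk}$ is the population mean of the cell at level $j$ of $A$ and level $k$ of $B$, and a dot denotes averaging over that index. The null says every cell mean is exactly the additive combination of the two main effects.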
12,121
What is the NULL hypothesis for interaction in a two-way ANOVA?
An interaction tells us that the levels of factor A have different effects based on what level of factor B you're applying. So we can test this through a linear contrast. Let C = (A1B1 - A1B2) - (A2B1 - A2B2) where A1B1 stands for the mean of the group that received A1 and B1 and so on. So here we're looking at A1B...
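With hypothetical cell means, the contrast arithmetic from the answer looks like this (the numbers are made up for illustration):

```python
# hypothetical cell means for a 2x2 design
A1B1, A1B2 = 10.0, 12.0
A2B1, A2B2 = 11.0, 17.0

# the contrast from the text: difference of B's simple effects across levels of A
C = (A1B1 - A1B2) - (A2B1 - A2B2)
print(C)  # (-2) - (-6) = 4: B's effect differs by 4 units between A1 and A2
```

A nonzero contrast like this is exactly what an interaction test is trying to detect; under the interaction null its population value is 0.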
12,122
Why is Gaussian distribution on high dimensional space like a soap bubble
I can't answer about what the OP's famous post claims, but let us consider the simpler case of uniform distributions on the unit disc: $(X,Y)$ is uniformly distributed on the unit disc (that is, $f_{X,Y}(x,y) = \frac{1}{\pi}$ for $x^2+y^2 < 1$). What is the probability that $(X,Y)$ is closer to the unit circle, that is, cl...
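The disc probability the answer sets up can be checked by Monte Carlo. A point is closer to the circle than to the center exactly when its radius exceeds $1/2$, and $P(R > 1/2) = 1 - (1/2)^2 = 3/4$ for the uniform disc. A sketch:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200_000

# sample uniformly on the unit disc by rejection from the enclosing square
pts = rng.uniform(-1, 1, size=(2 * n, 2))
pts = pts[np.sum(pts**2, axis=1) < 1][:n]

r = np.sqrt(np.sum(pts**2, axis=1))
# closer to the circle than to the center  <=>  1 - r < r  <=>  r > 1/2
prob = np.mean(r > 0.5)
print(prob)  # about 0.75, since P(R > 1/2) = 1 - (1/2)^2 = 3/4
```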
12,123
Why is Gaussian distribution on high dimensional space like a soap bubble
The post you link to concerns the use of the normal distribution in high-dimensional problems. So, suppose you are working in a space $\mathbb{R}^m$ where the dimension $m$ is large. Let $\boldsymbol{I}$ be the $m$-dimensional identity matrix and consider a normal random vector: $$\mathbf{X} \equiv (X_1,...,X_m) \sim...
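The concentration the answer describes is easy to simulate: the norm of a standard normal vector in $D$ dimensions concentrates around $\sqrt{D}$, and its relative spread shrinks as $D$ grows (a sketch; sample sizes and dimensions are my illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(9)
spread = {}
for D in [2, 100, 2000]:
    x = rng.standard_normal((2_000, D))
    r = np.linalg.norm(x, axis=1)          # distances from the origin
    spread[D] = float(r.std() / r.mean())  # relative spread of the norms
    print(D, float(r.mean() / np.sqrt(D)), spread[D])
```

The mean norm divided by $\sqrt{D}$ tends to 1 while the relative spread collapses, which is the "thin shell" (soap bubble) picture.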
12,124
Why is Gaussian distribution on high dimensional space like a soap bubble
I really think that the vision of an empty bubble is misleading. (tl;dr: instead of an empty bubble, I think it is better to say that it resembles a star with $n$ vertices where $n\rightarrow\infty$, or some kind of a non-empty fractal structure with the length of its border going to $\infty$.) But still more dense in the...
12,125
Why is Gaussian distribution on high dimensional space like a soap bubble
This is an old post with some great responses but I'd like to give a different perspective. Assume we take a sample $x$ from $\mathcal{N}(\vec0, \mathcal{I})$ in $D$ dimensions. If the high-dimensional Gaussian is hollow, then that would mean at least one coordinate of our sample $x$ deviates from the mean. By the CDF ...
12,126
Why is Gaussian distribution on high dimensional space like a soap bubble
I don't think it is true that "a Gaussian distribution in higher dimensions looks like a soap bubble". But first let's see why, in accordance with some of the very detailed responses above, one might be led to think so. In Cartesian coordinates in $D$ dimensions, after standardization, the probability density looks lik...
12,127
When to "add" layers and when to "concatenate" in neural networks?
Adding is nice if you want to interpret one of the inputs as a residual "correction" or "delta" to the other input. For example, the residual connections in ResNet are often interpreted as successively refining the feature maps. Concatenating may be more natural if the two inputs aren't very closely related. However, t...
12,128
When to "add" layers and when to "concatenate" in neural networks?
I am not an expert, but based on my light reading, 'addition' is used for 'identity links' in constructs such as Residual Blocks to preserve information prior to convolution, which, as the pros said, is useful as the network goes deeper. Concatenation is quite confusing when it comes to "how does it help?". As you said, ...
12,129
The definition of natural cubic splines for regression
Let's start by considering ordinary cubic splines. They're cubic between every pair of knots and cubic outside the boundary knots. We start with 4df for the first cubic (left of the first boundary knot), and each knot adds one new parameter (because the continuity of cubic splines and derivatives and second derivatives...
12,130
The definition of natural cubic splines for regression
I detail the assertion: "This frees up four degrees of freedom (two constraints each in both boundary regions)" in an example with $2$ knots $\xi_1, \xi_2$. The related intervals are $]-\infty, \xi_1[$, $]\xi_1, \xi_2[$ and $]\xi_2, +\infty[$ (so there are $|I|=3$ intervals and $|I|-1=2$ knots). For (common) cubic spl...
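A compact tally of the same counting for this two-knot case (my own summary of the constraints described above):

$$\underbrace{3\times 4}_{\text{cubic coefficients}} \;-\; \underbrace{2\times 3}_{\text{continuity constraints at the knots}} \;=\; 6 \text{ df for the ordinary cubic spline},$$

$$6 \;-\; \underbrace{2\times 2}_{\text{linearity constraints in the boundary regions}} \;=\; 2 \text{ df for the natural cubic spline},$$

which is exactly the "four degrees of freedom" (two per boundary region) that the natural spline trades away.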
12,131
How should I check the assumption of linearity to the logit for the continuous independent variables in logistic regression analysis?
Logistic regression does NOT assume a linear relationship between the dependent and independent variables. It does assume a linear relationship between the log odds of the dependent variable and the independent variables (This is mainly an issue with continuous independent variables.) There is a test called the Box-T...
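A common informal diagnostic for this assumption (not the Box-Tidwell test itself) is to bin the continuous predictor and plot the empirical log odds per bin against the bin midpoints; under linearity to the logit these points should fall roughly on a line. A hedged numpy sketch on simulated data (the bin count and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(-3, 3, n)
# Simulate an outcome whose log odds really are linear in x.
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
y = rng.binomial(1, p)

# Empirical logit per bin: under linearity to the logit these points
# should trace a straight line against the bin midpoints.
bins = np.linspace(-3, 3, 11)
mids, logits = [], []
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (x >= lo) & (x < hi)
    # A small continuity correction avoids log(0) in sparse bins.
    phat = (y[mask].sum() + 0.5) / (mask.sum() + 1.0)
    mids.append((lo + hi) / 2)
    logits.append(np.log(phat / (1 - phat)))

slope, intercept = np.polyfit(mids, logits, 1)
print(round(slope, 1))  # close to the true slope of 1.2
```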
12,132
How should I check the assumption of linearity to the logit for the continuous independent variables in logistic regression analysis?
As I describe in detail in my book Regression Modeling Strategies (2nd edition available 2015-09-04, e-book available now), the process of attempting to transform variables before modeling is fraught with problems, one of the most important being the distortion of type I error and confidence intervals. Categorization ...
12,133
How should I check the assumption of linearity to the logit for the continuous independent variables in logistic regression analysis?
I think that we should plot continuous variables and check for linearity before using them in a regression model. If linearity seems like a reasonable assumption, I think this will probably still hold in the final multivariable regression model in most cases, and if not, I think this might primarily be caused by intera...
12,134
How should I check the assumption of linearity to the logit for the continuous independent variables in logistic regression analysis?
Since I don't know your data I don't know if combining those three variables -- the basic variable, its natural log, and an interactive term -- will be a problem. However, I know that in the past when I have considered combining three terms I often lose conceptual track of what I am measuring. You need to have a go...
12,135
Train a Neural Network to distinguish between even and odd numbers
As with any machine learning task, the representation of your input plays a crucial role in how well you learn and generalise. I think, the problem with the representation is that the function (modulo) is highly non-linear and not smooth in the input representation you've chosen for this problem. I would try the follow...
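To illustrate the representation point: with a binary-digit encoding, parity is linearly separable (it is just the lowest bit), so even a plain perceptron learns it perfectly. A small self-contained sketch (the 8-bit width and epoch count are arbitrary choices):

```python
import numpy as np

def to_bits(n, width=8):
    # Least-significant bit first: bit 0 is exactly the parity of n.
    return np.array([(n >> i) & 1 for i in range(width)], dtype=float)

X = np.array([to_bits(n) for n in range(256)])
y = np.array([n % 2 for n in range(256)])  # 1 = odd

# Classic perceptron rule; because the problem is linearly separable in
# this representation, training converges to zero mistakes.
w, b = np.zeros(8), 0.0
for _ in range(10):
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += (yi - pred) * xi
        b += (yi - pred)

preds = (X @ w + b > 0).astype(int)
print((preds == y).mean())  # 1.0
```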
12,136
Train a Neural Network to distinguish between even and odd numbers
Learning to classify odd numbers and even numbers is a difficult problem. A simple pattern keeps repeating infinitely. 2,4,6,8..... 1,3,5,7..... Nonlinear activation functions like sin(x) and cos(x) behave similarly. Therefore, if you change your neurons to implement sin and cos instead of popular activation function...
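A toy illustration of the periodicity idea: for integer $n$, $\cos(\pi n) = (-1)^n$, so a single cosine "neuron" with hand-set weights already separates even from odd directly on the raw integer (this is a fixed sketch, not a trained network):

```python
import numpy as np

n = np.arange(20)
# cos(pi * n) equals (-1)**n for integer n, so thresholding at zero
# recovers parity without any binary encoding of the input.
even = np.cos(np.pi * n) > 0
print(even.astype(int))  # 1 for even n, 0 for odd n
```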
12,137
Train a Neural Network to distinguish between even and odd numbers
So I'm working with neural nets right now and I ran into the same issue as you. What I ended up doing was representing the input number as an array with values equal to the binary representation of the number. Since what we are doing is classifying I represented my output as an array, not a single value. ex: input = [ ...
12,138
Train a Neural Network to distinguish between even and odd numbers
I got here while struggling with a similar problem, so I'll write up what I managed. As far as I know, a single-layer perceptron can solve any problem that ultimately reduces to separating objects with a straight line, and this is that kind of problem. If you draw the last bit of the binary representation
12,139
Train a Neural Network to distinguish between even and odd numbers
It is well known that logic gates NOT, AND, OR can all be done with very simple neural networks (NN), and that you can build a complete arithmetic calculator with logic gates using binary numbers as input. Therefore you should be able to create a NN to calculate n modulo k, for any n and k numbers expressed in base 2....
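As a concrete instance of the logic-gate point, here is a fixed-weight two-layer threshold network computing XOR, the classic gate that is not linearly separable (weights set by hand, not learned):

```python
import numpy as np

def step(z):
    # Hard threshold activation, as in a classic perceptron unit.
    return (z > 0).astype(float)

def xor_net(x):
    # Hidden layer: both units sum the inputs; the biases make the first
    # unit fire on OR (sum > 0.5) and the second on AND (sum > 1.5).
    h = step(x @ np.ones((2, 2)) + np.array([-0.5, -1.5]))
    # Output: OR minus AND, thresholded -> XOR.
    return step(h @ np.array([1.0, -1.0]) - 0.5)

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(xor_net(inputs))  # [0. 1. 1. 0.]
```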
12,140
Train a Neural Network to distinguish between even and odd numbers
My solution:

import numpy as np

def layer_1_z(x, w1, b1):
    return 1 / w1 * x + b1

def layer_2(x, w1, b1, w2, b2):
    y1 = layer_1_z(x, w1, b1)
    y2 = y1 - np.floor(y1)
    return w2 * y2 + b2

def layer_2_activation(x, w1, b1, w2, b2):
    y2 = layer_2(x, w1, b1, w2, b2)
    # return 1 / (1 + np.exp(-y2))
    ...
12,141
Train a Neural Network to distinguish between even and odd numbers
One idea that avoids an explicit "mod 2" in the input could be to encode the number as a sequence of pixels; then the problem amounts to recognizing whether the segment can be split into two equal segments. This is a machine vision problem and could be learned by conventional networks. On the other extreme, if the nu...
12,142
Train a Neural Network to distinguish between even and odd numbers
I created such a network here. The representation @William Gottschalk gave was the foundation. It just uses 1 neuron in the first hidden layer with 32 inputs. The output layer has just 2 neurons for one-hot encoding of 0 and 1.
12,143
What exactly are moments? How are they derived?
It's been a long time since I took a physics class, so let me know if any of this is incorrect. General description of moments with physical analogs Take a random variable, $X$. The $n$-th moment of $X$ around $c$ is: $$m_n(c)=E[(X-c)^n]$$ This corresponds exactly to the physical sense of a moment. Imagine $X$ as a col...
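For a sample, the $n$-th moment about $c$ is simply the average of $(x_i - c)^n$; a quick numpy sketch of the definition above:

```python
import numpy as np

def moment(x, n, c=0.0):
    """n-th sample moment of x about the point c: the average of (x - c)^n."""
    return np.mean((np.asarray(x, dtype=float) - c) ** n)

x = [1.0, 2.0, 3.0, 4.0]
print(moment(x, 1))                # 2.5  (mean: first moment about 0)
print(moment(x, 2, c=np.mean(x)))  # 1.25 (variance: second moment about the mean)
```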
12,144
What exactly are moments? How are they derived?
This is a bit of an old thread, but I wish to correct a misstatement in the comment by Fg Nu who wrote "Moments are parameterized by the natural numbers, and completely characterize a distribution". Moments do NOT completely characterize a distribution. Specifically, knowledge of all infinitely many moments, even if...
12,145
What exactly are moments? How are they derived?
A corollary to Glen_b's remarks is that the first moment, the mean, corresponds to the center of gravity for a physical object, and the second moment around the mean, the variance, corresponds to its moment of inertia. After that, you're on your own.
12,146
What exactly are moments? How are they derived?
A binomial tree has two branches each with a probability of 0.5. Actually, p=0.5, and q=1-0.5=0.5. This generates a normal distribution with an evenly distributed probability mass. Actually, we have to assume that each tier in the tree is complete. When we break data up into bins, we get a real number from the division, ...
12,147
What exactly are moments? How are they derived?
How can I build intuition for what moments really are? In the context of computer vision, recognizing two-dimensional shapes such as letters or geometric objects from pixel data, a classic article exploiting moments is: Ming-Kuei Hu, "Visual pattern recognition by moment invariants," in IRE Transactions on Information...
12,148
Uniform random variable as sum of two random variables
The result can be proven with a picture: the visible gray areas show that a uniform distribution cannot be decomposed as a sum of two independent identically distributed variables. Notation Let $X$ and $Y$ be iid such that $X+Y$ has a uniform distribution on $[0,1]$. This means that for all $0\le a \le b \le 1$, $$\Pr...
12,149
Uniform random variable as sum of two random variables
I tried finding a proof without considering characteristic functions. Excess kurtosis does the trick. Here's the two-line answer: $\text{Kurt}(U) = \text{Kurt}(X + Y) = \text{Kurt}(X) / 2$ since $X$ and $Y$ are iid. Then $\text{Kurt}(U) = -1.2$ implies $\text{Kurt}(X) = -2.4$ which is a contradiction as $\text{Kurt}(X...
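The halving step can also be seen via cumulants (a sketch added here; $\kappa_n$ denotes the $n$-th cumulant, which is additive over independent summands, and excess kurtosis is $\kappa_4/\kappa_2^2$):

$$\text{Kurt}(X+Y) = \frac{\kappa_4(X)+\kappa_4(Y)}{\left(\kappa_2(X)+\kappa_2(Y)\right)^2} = \frac{2\,\kappa_4(X)}{4\,\kappa_2(X)^2} = \frac{\text{Kurt}(X)}{2},$$

while the excess kurtosis of the uniform distribution is the familiar $-6/5 = -1.2$.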
12,150
Uniform random variable as sum of two random variables
Assume $U = X + Y$ for two i.i.d random variables. First note that since $U$ has support $[0, 1]$, $X$ must be a bounded random variable (bounded by $1/2$), as a result of \begin{align} P[X > 1/2]^2 = P[X > 1/2, Y > 1/2] \leq P[X + Y > 1] = P[U > 1] = 0. \end{align} This shows that $X$ has moment of order $n$ for all $...
12,151
Can I use Kolmogorov-Smirnov to compare two empirical distributions?
That is OK, and quite reasonable. It is referred to as the two-sample Kolmogorov-Smirnov test. Measuring the difference between two distribution functions by the supnorm is always sensible, but to do a formal test you want to know the distribution under the hypothesis that the two samples are independent and each i.i.d...
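With SciPy, the two-sample test is available directly as `scipy.stats.ks_2samp`; a small sketch on simulated data (the mean shift of 0.5 and sample sizes are arbitrary choices):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 500)
b = rng.normal(0.5, 1.0, 500)  # same shape, shifted mean

# The test statistic is the sup distance between the two empirical CDFs;
# the p-value comes from its distribution under the null of equal parents.
res = ks_2samp(a, b)
print(res.statistic, res.pvalue)  # sizeable statistic, tiny p-value: reject
```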
12,152
Stepwise regression in R - How does it work?
Perhaps it would be easier to understand how stepwise regression is being done by looking at all 15 possible lm models. Here's a quickie to generate formulas for all 15 combinations.

library(leaps)
tmp <- regsubsets(mpg ~ wt + drat + disp + qsec, data=mtcars,
                  nbest=1000, really.big=T, intercept=F)
all.mods <- summary(tmp)...
12,153
Stepwise regression in R - How does it work?
Here is a simplified response. First, both procedures try to reduce the AIC of a given model, but they do it in different ways. The basic difference is that in the backward selection procedure you can only discard variables from the model at any step, whereas in stepwise selection you can also add variables to the m...
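The AIC comparison that drives a single backward step can be illustrated with a toy example (the RSS values below are invented for illustration; n = 32 only because mtcars has 32 rows):

```python
import math

n = 32                                # mtcars has 32 rows; RSS values are invented
models = {                            # predictor set -> hypothetical residual sum of squares
    frozenset("abcd"): 100.0,
    frozenset("abc"): 101.0,          # dropping d barely hurts the fit
    frozenset("abd"): 103.0,
    frozenset("acd"): 140.0,
    frozenset("bcd"): 180.0,
}

def aic(preds):
    """Gaussian AIC up to an additive constant: n*log(RSS/n) + 2k."""
    return n * math.log(models[frozenset(preds)] / n) + 2 * len(preds)

current = frozenset("abcd")
candidates = {p: aic(current - {p}) for p in current}
best = min(candidates, key=candidates.get)
dropped = candidates[best] < aic(current)   # backward step: drop only if AIC falls
print(best, dropped)  # d True
```

The step drops "d" because removing it lowers AIC (the small RSS penalty is outweighed by the 2-per-parameter penalty); stepwise selection would additionally consider re-adding previously dropped variables at each step.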
12,154
Non-normal distributions with zero skewness and zero excess kurtosis?
Yes, examples with skewness and excess kurtosis both zero are relatively easy to construct. (Indeed examples (a) to (d) below also have Pearson mean-median skewness 0) (a) For example, in this answer an example is given by taking a 50-50 mixture of a gamma variate, (which I call $X$), and the negative of a second one, ...
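Construction (a) can be checked with exact moments: for a 50-50 mixture of X and -X with X ~ Gamma(k, 1), skewness is 0 by symmetry, and kurtosis E[X^4]/E[X^2]^2 equals 3 exactly when k = (1 + sqrt(13))/2 (this shape value is derived here for the sketch, not quoted from the linked answer):

```python
import math

def gamma_raw_moment(k, m):
    """E[X^m] for X ~ Gamma(k, 1): k (k+1) ... (k+m-1)."""
    out = 1.0
    for j in range(m):
        out *= k + j
    return out

# Y is a 50-50 mixture of X and -X, i.e. Y = S*X with S = +/-1 independent of X
k = (1 + math.sqrt(13)) / 2     # root of k^2 - k - 3 = 0, which forces kurtosis 3

mu2 = gamma_raw_moment(k, 2)    # E[Y^2] = E[X^2]
mu3 = 0.0                       # odd moments vanish by symmetry
mu4 = gamma_raw_moment(k, 4)    # E[Y^4] = E[X^4]

skewness = mu3 / mu2 ** 1.5
excess_kurtosis = mu4 / mu2 ** 2 - 3
print(skewness, excess_kurtosis)  # 0.0 and (numerically) 0.0
```

So the mixture is clearly bimodal and non-normal, yet both its skewness and its excess kurtosis are exactly zero.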
12,155
Non-normal distributions with zero skewness and zero excess kurtosis?
Good points are made by Glen_b. I would only add consideration of the Dirac Delta function as additional grist for the mill. As Wikipedia notes, "The DDF is a generalized function, or distribution, on the real number line that is zero everywhere except at zero, with an integral of one over the entire real line" with th...
12,156
Why is chi square used when creating a confidence interval for the variance?
Quick answer: the reason is that, assuming the data are i.i.d. and $X_i\sim N(\mu,\sigma^2)$, and defining \begin{eqnarray*} \bar{X}&=&\sum^N \frac{X_i}{N}\\ S^2 &=& \sum^{N} \frac{(\bar{X}-X_i)^2}{N-1} \end{eqnarray*} when forming confidence intervals, the sampling distribution associated with the sample variance ($...
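A quick Monte Carlo check of why the chi-square appears (a sketch with arbitrary n, mu, and sigma): the pivot (n-1)S^2/sigma^2 should behave like a chi-square variate with n-1 degrees of freedom, so its mean should be near n-1 and its variance near 2(n-1):

```python
import random, statistics

random.seed(1)
n, sigma, trials = 10, 2.0, 20000

pivots = []
for _ in range(trials):
    x = [random.gauss(5.0, sigma) for _ in range(n)]
    s2 = statistics.variance(x)                 # sample variance, n-1 denominator
    pivots.append((n - 1) * s2 / sigma ** 2)    # should be ~ chi-square(n-1)

mean_p = statistics.fmean(pivots)
var_p = statistics.variance(pivots)
print(round(mean_p, 2), round(var_p, 2))  # near n-1 = 9 and 2(n-1) = 18
```

Inverting that pivot with chi-square quantiles is exactly what produces the usual confidence interval for sigma^2.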
12,157
Statistical intuition/data sense
I would first say that we shouldn't slight mathematics. It is an important tool in the development of statistical theory, and statistical methods are justified by theory. Theory also tells you what is wrong and what techniques might be better (e.g. more efficient). So I think mathematical knowledge and thinking is imp...
12,158
Statistical intuition/data sense
In the example you mention, the core issue is causal inference. A good place to start for causal inference is this triple-book-review by Andrew Gelman, and the books reviewed therein. In addition to learning about causal inference, you should learn about the value of exploratory data analysis, description, and predic...
12,159
Statistical intuition/data sense
A nice, free resource is the Chance News Wiki. It has many examples pulled from real life, along with discussion of good and bad points in how people interpret data and statistics. Often there are discussion questions as well (part of the motivation of the site is to give teachers of statistics real-world example...
12,160
Statistical intuition/data sense
+1 for a great question! (And +1 to all the answerers thus far.) I think there very much is such a thing as data sense, but I don't think there's anything mystical to it. The analogy I would use is to driving. When you are driving down the road, you just know what is going on with the other cars. For example, you...
12,161
Caret re-sampling methods
Ok, here is my try: boot -- bootstrap boot632 -- 0.632 bootstrap cv -- cross-validation, probably this refers to K-fold cross-validation. LOOCV -- leave-one-out cross validation, also known as jackknife. LGOCV -- leave-group-out cross validation, variant of LOOCV for hierarchical data. repeatedcv -- is probably repeated...
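The cv entry (K-fold) amounts to shuffling the row indices and slicing them into K disjoint held-out folds; a minimal Python sketch (the helper name is made up, this is not caret's implementation):

```python
import random

def kfold_indices(n, k, seed=0):
    """Shuffle 0..n-1 and slice into k (nearly) equal held-out folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = kfold_indices(20, 5)
for test in folds:
    train = [j for f in folds if f is not test for j in f]
    # ... fit on the train rows, score on the test rows, then average the k scores ...

print([len(f) for f in folds])  # [4, 4, 4, 4, 4]
```

LOOCV is the special case k = n, and LGOCV repeatedly holds out a random group instead of a fixed fold.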
12,162
Caret re-sampling methods
The repeatedcv method is repeated 10-fold cross-validation, according to Max Kuhn's presentation. The default resampling scheme is the bootstrap. A good reference on resampling methods is Predictive Modeling with R and the caret Package (pdf), which Max presented at useR! 2013.
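Repeated K-fold just reruns the K-fold split with fresh shuffles and averages over all held-out scores; a self-contained sketch of the index generation (helper name made up):

```python
import random

def kfold_indices(n, k, seed):
    """One K-fold split: shuffle 0..n-1 and slice into k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

n, k, repeats = 20, 10, 3
splits = [fold for r in range(repeats) for fold in kfold_indices(n, k, seed=r)]
print(len(splits))  # 3 repeats x 10 folds = 30 held-out sets to average over
```

Each repeat is a complete partition of the data, so every observation is held out exactly once per repeat.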
12,163
Can a small sample size cause type 1 error?
As a general principle, small sample size will not increase the Type I error rate for the simple reason that the test is arranged to control the Type I rate. (There are minor technical exceptions associated with discrete outcomes, which can cause the nominal Type I rate not to be achieved exactly especially with small...
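The point that the Type I rate is controlled by construction, not by sample size, is easy to simulate with an exact test (a two-sided z-test with known sigma, so there is no small-sample approximation error):

```python
import random, statistics

def rejection_rate(n, trials=20000, seed=0):
    """Two-sided z-test of H0: mu = 0 with known sigma = 1 at alpha = 0.05."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        xbar = statistics.fmean(rng.gauss(0.0, 1.0) for _ in range(n))
        rejections += abs(xbar * n ** 0.5) > 1.959964
    return rejections / trials

r5, r50 = rejection_rate(5), rejection_rate(50)
print(r5, r50)  # both near 0.05, regardless of sample size
```

Under the null, the rejection rate stays at the nominal alpha whether n is 5 or 50.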
12,164
Can a small sample size cause type 1 error?
Another consequence of a small sample is an increase in Type II error. Nunnally demonstrated in the paper "The place of statistics in psychology" (1960) that small samples generally fail to reject a point null hypothesis. These hypotheses are ones in which some parameter equals zero, and are known to be false in t...
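The Type II side can be simulated the same way: with a real effect present, the same z-test misses it far more often at small n (the effect size and alpha below are illustrative):

```python
import random, statistics

def power(n, mu=0.5, trials=5000, seed=1):
    """Rejection rate of a two-sided z-test (sigma = 1, alpha = 0.05) when mu != 0."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xbar = statistics.fmean(rng.gauss(mu, 1.0) for _ in range(n))
        hits += abs(xbar * n ** 0.5) > 1.959964
    return hits / trials

p5, p50 = power(5), power(50)
print(p5, p50)  # the small sample misses the real effect far more often
```

So a small sample does not inflate Type I error, but it does leave most real effects undetected, which is Nunnally's point.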
12,165
How is the .similarity method in SpaCy computed?
Found the answer, in short, it's yes: Link to Source Code return numpy.dot(self.vector, other.vector) / (self.vector_norm * other.vector_norm) This looks like it's the formula for computing cosine similarity and the vectors seem to be created with SpaCy's .vector which the documentation says is trained from GloVe's w2...
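The quoted formula is just cosine similarity; a stdlib Python version of the same computation (spaCy applies it to its own vectors, of course):

```python
import math

def cosine_similarity(u, v):
    """dot(u, v) / (||u|| * ||v||), matching the quoted spaCy source line."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

same = cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])   # ~1.0
orth = cosine_similarity([1.0, 0.0], [0.0, 1.0])             # 0.0
print(same, orth)
```

Identical directions score (numerically) 1 and orthogonal vectors score 0, which is why the measure ignores vector magnitude.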
12,166
How is the .similarity method in SpaCy computed?
By default it's cosine similarity, with vectors averaged over the document for missing words. You can also customize this, by setting a hook to doc.user_hooks['similarity']. This pipeline component wraps similarity functions, making it easy to customise the similarity: https://github.com/explosion/spaCy/blob/develop/sp...
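The averaging part can be sketched directly (the 2-d "word vectors" below are made-up toy values, not spaCy vectors):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def doc_vector(token_vectors):
    """Element-wise average of the word vectors (how a doc-level vector is formed)."""
    n = len(token_vectors)
    dim = len(token_vectors[0])
    return [sum(vec[i] for vec in token_vectors) / n for i in range(dim)]

doc_a = [[1.0, 0.0], [0.0, 1.0]]   # made-up 2-d word vectors
doc_b = [[2.0, 2.0]]
sim = cosine(doc_vector(doc_a), doc_vector(doc_b))
print(round(sim, 3))  # 1.0: both averaged vectors point along (1, 1)
```

The user hook mentioned above would simply replace this default comparison with a custom function.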
12,167
How to calculate number of features based on image resolution?
Perhaps a simpler case will make things clearer. Let's say we choose a 1x2 sample of pixels instead of 100x100. Sample Pixels From the Image +----+----+ | x1 | x2 | +----+----+ Imagine that when plotting our training set we notice it can't be separated easily with a linear model, so we choose to add polynomial terms ...
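For the 1x2 example, the quadratic feature map sends (x1, x2) to (x1, x2, x1^2, x1*x2, x2^2); a small sketch (the function name is made up):

```python
from itertools import combinations_with_replacement

def quadratic_features(x):
    """Linear terms plus every degree-2 monomial (squares and cross terms)."""
    quad = [x[i] * x[j]
            for i, j in combinations_with_replacement(range(len(x)), 2)]
    return list(x) + quad

feats = quadratic_features([3.0, 5.0])   # x1, x2, x1^2, x1*x2, x2^2
print(feats)  # [3.0, 5.0, 9.0, 15.0, 25.0]
```

With n raw features this produces n + n(n+1)/2 terms, which is what blows up so quickly for full-resolution images.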
12,168
How to calculate number of features based on image resolution?
If you are using all the linear and quadratic features, the total number is supposed to be: 100*100 (xi) + 100*100 (xi^2) + C(100*100, 2) (xi*xj) = 10000 + 10000 + 49995000 = 50015000. To calculate the combination in Octave/Matlab: octave:23> nchoosek(100*100, 2) ans = 49995000
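The same count in Python's stdlib, for cross-checking the Octave call:

```python
import math

n = 100 * 100                 # pixel intensities, i.e. raw features
linear = n                    # x_i terms
squares = n                   # x_i^2 terms
cross = math.comb(n, 2)       # x_i * x_j terms with i < j
total = linear + squares + cross
print(total)  # 50015000
```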
12,169
How to calculate number of features based on image resolution?
The idea of ($x^2$)/2 might also work to approximate the number of quadratic features. So if n = 10000 (a 100x100 image), substituting x = 10000 into the formula gives roughly 50 million.
12,170
Difference between multivariate standard normal distribution and Gaussian copula
One general rule about technical papers--especially those found on the Web--is that the reliability of any statistical or mathematical definition offered in them varies inversely with the number of unrelated non-statistical subjects mentioned in the paper's title. The page title in the first reference offered (in a co...
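The operational difference is easy to state in code: a Gaussian copula sample is a correlated multivariate normal draw pushed through the standard normal CDF coordinate-wise, so the margins become Uniform(0, 1) rather than normal while the dependence structure stays Gaussian (a 2-d sketch with an assumed correlation of 0.8):

```python
import random, statistics

nd = statistics.NormalDist()             # standard normal; .cdf gives Phi
rng = random.Random(0)
rho = 0.8                                # assumed correlation for the sketch

def copula_pair():
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + (1 - rho ** 2) ** 0.5 * rng.gauss(0.0, 1.0)  # correlated normals
    return nd.cdf(z1), nd.cdf(z2)        # Phi pushes each margin to Uniform(0, 1)

u = [copula_pair() for _ in range(5000)]
u1_mean = statistics.fmean(a for a, _ in u)
print(round(u1_mean, 2))                 # near 0.5, as a uniform margin should be
```

Dropping the final Phi step leaves you with an ordinary bivariate normal; applying any other inverse marginal CDFs to (u1, u2) gives a joint distribution with Gaussian dependence but arbitrary margins, which is the whole point of the copula.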
12,171
Regression for a model of form $y=ax^k$?
Your example is a very good one because it clearly points up recurrent issues with such data. Two common names are power function and power law. In biology, and some other fields, people often talk of allometry, especially whenever you are relating size measurements. In physics, and some other fields, people talk of sc...
12,172
Regression for a model of form $y=ax^k$?
If you assume that a power is a good model to fit, then you can use log(y) ~ log(x) as your model, and fit a linear regression using lm(): Try this: # Generate some data set.seed(42) x <- seq(1, 10, 1) a = 10 b = 2 scatt <- rnorm(10, sd = 0.2) dat <- data.frame( x = x, y = a*x^(-b) + scatt ) Fit a model: # Fit...
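The same log-log fit in plain Python (here with multiplicative noise, the error structure under which the log-log regression is exactly right; the true a and k are chosen to mirror the R snippet, where b = 2 means k = -2):

```python
import math, random

random.seed(42)
a, k = 10.0, -2.0                        # true parameters, mirroring the R snippet
xs = [float(v) for v in range(1, 11)]
ys = [a * x ** k * math.exp(random.gauss(0.0, 0.05)) for x in xs]  # multiplicative noise

lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)

# OLS of log(y) on log(x): the slope estimates k, the intercept estimates log(a)
slope = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
a_hat = math.exp(my - slope * mx)
print(round(slope, 2), round(a_hat, 2))  # close to -2.0 and 10.0
```

If the noise is additive on the original scale instead, nonlinear least squares on y = a*x^k is the more defensible fit, which is one of the recurring issues the answer above warns about.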
12,173
Difference in using normalized gradient and gradient
In a gradient descent algorithm, the algorithm proceeds by finding a direction along which you can find the optimal solution. The optimal direction turns out to be the gradient. However, since we are only interested in the direction and not necessarily how far we move along that direction, we are usually not interested...
12,174
Difference in using normalized gradient and gradient
Which method has faster convergence will depend on your specific objective, and generally I use the normalized gradient. A good example of why you might want to do this is a simple quadratic: $f(x) = x^Tx$. In this case the ODE that describes a given gradient descent trajectory (as step sizes approaches zero) can be ...
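A 1-d version of the quadratic example: on f(x) = x^2, plain gradient descent with a fixed step contracts the error geometrically, while the normalized (unit) gradient moves at the constant speed eta, covering only eta*t after t steps and ultimately bouncing inside an eta-ball unless the step size decays:

```python
def raw_step(x, eta=0.1):
    return x - eta * 2 * x                      # gradient of x^2 is 2x

def norm_step(x, eta=0.1):
    return x - eta * (1.0 if x > 0 else -1.0)   # unit-length gradient

x_raw = x_norm = 10.0
for _ in range(50):
    x_raw = raw_step(x_raw)
    x_norm = norm_step(x_norm)

print(round(x_raw, 4), round(x_norm, 4))  # raw is ~0.0001; normalized is still at 5.0
```

Far from the optimum the roles reverse: on very flat or very steep regions the normalized step keeps a constant pace while the raw step crawls or overshoots, which is why the better choice depends on the objective and on how eta is scheduled.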
12,175
Difference in using normalized gradient and gradient
What really matters is how $\eta$ is selected. It doesn't matter whether you use the normalized gradient or the unnormalized gradient if the step size is selected in a way that makes the length of $\eta$ times the gradient the same.
12,176
How to interpret coefficients from a logistic regression?
If you're fitting a binomial GLM with a logit link (i.e. a logistic regression model), then your regression equation is the log odds that the response value is a '1' (or a 'success'), conditioned on the predictor values. Exponentiating the log odds gives you the odds ratio for a one-unit increase in your variable. So ...
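The exponentiation step, with some hypothetical coefficients (the numbers are made up; the pattern is coefficient -> odds ratio via exp, and linear predictor -> probability via the inverse logit):

```python
import math

# Hypothetical fitted logistic-regression coefficients (invented for illustration)
intercept, beta_age = -1.2, 0.031

odds_ratio = math.exp(beta_age)      # multiplicative change in the odds per extra year

def prob(age):
    """Inverse logit of the linear predictor: the fitted P(y = 1 | age)."""
    log_odds = intercept + beta_age * age
    return 1 / (1 + math.exp(-log_odds))

print(round(odds_ratio, 3), round(prob(40), 3))  # 1.031 0.51
```

So a coefficient of 0.031 means the odds are multiplied by about 1.031 for each one-unit increase in the predictor, holding the others fixed.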
12,177
How to interpret coefficients from a logistic regression?
The odds ratio for women should be 1 / exp(0.014). Explanation: since the event for male is '1' and female is '0', the reference level is female. The equation: ln(s) = B0 + B1*(gender); odds(female) = exp(B0); odds(male) = exp(B0 + B1 * 1); odds ratio(male) = odds(male) / odds(female) = exp(0.014) = 1.01. Theref...
12,178
How to compute prediction bands for non-linear regression?
This is called the Delta Method. Suppose that you have some function $y = G(\beta,x) + \epsilon$; note that $G(\cdot)$ is a function of the parameters that you estimate, $\beta$, and the values of your predictors, $x$. First, find the derivative of this function with respect to your vector of parameters, $\beta$: $G^\...
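A numerical sketch of the recipe (the mean function, parameter values, and covariance matrix below are all invented for illustration; the gradient is taken by central differences). This gives the delta-method band for the fitted curve; a prediction band would additionally add the residual variance inside the square root:

```python
import math

def G(beta, x):
    """Hypothetical nonlinear mean function, purely for illustration."""
    return beta[0] * math.exp(beta[1] * x)

def gradient(beta, x, h=1e-6):
    """Central-difference gradient of G with respect to beta."""
    g = []
    for i in range(len(beta)):
        bp, bm = list(beta), list(beta)
        bp[i] += h
        bm[i] -= h
        g.append((G(bp, x) - G(bm, x)) / (2 * h))
    return g

beta_hat = [2.0, 0.5]                    # pretend nonlinear least-squares estimates
V = [[0.04, 0.0], [0.0, 0.01]]           # pretend covariance matrix of beta_hat

x0 = 1.0
g = gradient(beta_hat, x0)
var_fit = sum(g[i] * V[i][j] * g[j] for i in range(2) for j in range(2))
se = math.sqrt(var_fit)                  # delta-method standard error of G(beta_hat, x0)
band = (G(beta_hat, x0) - 1.96 * se, G(beta_hat, x0) + 1.96 * se)
print(band)
```

Repeating this over a grid of x0 values traces out the band around the fitted curve.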
12,179
Why do we make a big fuss about using Fisher scoring when we fit a GLM?
Fisher's scoring is just a version of Newton's method that happens to be identified with GLMs; there's nothing particularly special about it, other than the fact that the Fisher information matrix happens to be rather easy to find for random variables in the exponential family. It also ties in to a lot of other math...
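Fisher scoring for a logistic regression is compact enough to write out: iterate beta <- beta + I^{-1} U with score U = X'(y - p) and expected information I = X'WX, where W = diag(p(1 - p)). A stdlib sketch on simulated data (one predictor plus intercept, with the 2x2 solve done by hand):

```python
import math, random

# Simulate data from a logistic model: P(y=1 | x) = 1/(1 + exp(-(b0 + b1*x)))
rng = random.Random(0)
b0_true, b1_true = -0.5, 1.5
data = []
for _ in range(500):
    x = rng.gauss(0.0, 1.0)
    p = 1 / (1 + math.exp(-(b0_true + b1_true * x)))
    data.append((x, 1 if rng.random() < p else 0))

b0 = b1 = 0.0
for _ in range(25):
    u0 = u1 = i00 = i01 = i11 = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(b0 + b1 * x)))
        w = p * (1 - p)                  # variance function
        u0 += y - p                      # score U = X'(y - p)
        u1 += (y - p) * x
        i00 += w                         # expected information I = X'WX
        i01 += w * x
        i11 += w * x * x
    det = i00 * i11 - i01 * i01          # solve the 2x2 system I @ step = U
    step0 = (i11 * u0 - i01 * u1) / det
    step1 = (i00 * u1 - i01 * u0) / det
    b0, b1 = b0 + step0, b1 + step1
    if max(abs(step0), abs(step1)) < 1e-10:
        break

print(round(b0, 2), round(b1, 2))  # should land near -0.5 and 1.5
```

For the canonical logit link the expected and observed information coincide, so this is exactly Newton's method; each update is also a weighted least-squares solve, which is the IRLS view mentioned in the next answer.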
12,180
Why do we make a big fuss about using Fisher scoring when we fit a GLM?
It's historical and pragmatic; Nelder and Wedderburn reverse-engineered GLMs as the set of models where you can find the MLE using Fisher scoring (i.e. Iteratively ReWeighted Least Squares). The algorithm came before the models, at least in the general case. It's also worth remembering that IWLS was what they had ava...
12,181
New revolutionary way of data mining?
Does this make any sense? Partly. What does he mean? Please ask him. Do you have a clue - or perhaps even a name for the proposed method and some references? Cross Validation. http://en.wikipedia.org/wiki/Cross-validation_(statistics) Or did this guy find the holy grail nobody else understands? No. He even says in ...
12,182
New revolutionary way of data mining?
Not sure if there'll be any other "ranty" responses, but here's mine. Cross Validation is in no way "new". Additionally, Cross Validation is not used when analytic solutions are found. For example you don't use cross validation to estimate the betas, you use OLS or IRLS or some other "optimal" solution. What I see as a ...
12,183
New revolutionary way of data mining?
You can look for patterns where, on average, all the models out-of-sample continue to do well. My understanding of the word patterns here, is he means different market conditions. A naive approach will analyse all available data (we all know more data is better), to train the best curve fitting model, then run it on...
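One common way to operationalize the rolling out-of-sample scheme described above is walk-forward (rolling-origin) validation; a minimal sketch, with arbitrary window sizes chosen purely for illustration:

```python
def walk_forward_splits(n_obs, train_size, test_size):
    """Yield (train_indices, test_indices) pairs that roll forward in time,
    so each model is scored only on data after its training window."""
    start = 0
    while start + train_size + test_size <= n_obs:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        yield train, test
        start += test_size  # slide the origin forward by one test window

for train, test in walk_forward_splits(10, 4, 2):
    print(train, test)
```

Unlike ordinary k-fold cross-validation, no split ever trains on observations that come after the ones it is tested on.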
12,184
New revolutionary way of data mining?
His explanation about a common error in data mining seems sensible. His explanation of what he does does not make any sense. What does he mean when he says "Generally speaking, you are really getting somewhere if the out-of-sample results are more than 50 percent of the in-sample."? Then bad-mouthing SAS and IBM doesn't ...
12,185
New revolutionary way of data mining?
As a finance professional, I know enough context that the statement does not present any ambiguity. Financial time series are often characterized by regime changes, structural breaks, and concept drift, so cross-validation as practised in other industries is not as successful in financial applications. In the second pa...
12,186
Normalizing constant in Bayes theorem
The denominator, $\Pr(\textrm{data})$, is obtained by integrating out the parameters from the joint probability, $\Pr(\textrm{data}, \textrm{parameters})$. This is the marginal probability of the data and, of course, it does not depend on the parameters since these have been integrated out. Now, since: $\Pr(\textrm{da...
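A tiny discrete illustration of this marginalization, with made-up prior and data: the denominator is just the sum of prior times likelihood over the parameter values, and dividing by it makes the posterior sum to 1.

```python
# hypothetical coin-bias example: parameter theta takes one of three values
prior = {0.2: 1 / 3, 0.5: 1 / 3, 0.8: 1 / 3}
data_heads, data_tails = 7, 3

def likelihood(theta):
    # probability of this particular sequence of 7 heads and 3 tails
    return theta ** data_heads * (1 - theta) ** data_tails

# Pr(data) = sum over parameters of Pr(data | theta) * Pr(theta)
pr_data = sum(likelihood(t) * p for t, p in prior.items())

posterior = {t: likelihood(t) * p / pr_data for t, p in prior.items()}
print(posterior)
```

Note that dropping `pr_data` would leave the posterior known only up to proportionality, which is exactly the role of the normalizing constant.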
12,187
Normalizing constant in Bayes theorem
When applying Bayes' rule, we usually wish to infer the "parameters" and the "data" is already given. Thus, $\Pr(\textrm{data})$ is a constant and we can assume that it is just a normalizing factor.
12,188
Normalizing constant in Bayes theorem
Most explanations of Bayes miss the mark. Consider the following for the role of Pr(B). The crux of Bayes is the "update factor" $[Pr(B|A) / Pr(B)]$. This is the transformation applied to the prior. If B always occurs in all states of the world, there is no information content & the update factor is 1. In this case, $...
12,189
Measuring Document Similarity
For text documents, the feature vectors can be very high dimensional and sparse under any of the standard representations (bag of words or TF-IDF etc). Measuring distances directly under such a representation may not be reliable since it is a known fact that in very high dimensions, distance between any two points star...
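For reference, cosine similarity between bag-of-words term-frequency vectors can be computed directly; a minimal pure-Python sketch with toy documents, no TF-IDF weighting or dimensionality reduction:

```python
import math
from collections import Counter

def cosine_similarity(doc1, doc2):
    """Cosine of the angle between two bag-of-words term-frequency vectors."""
    v1, v2 = Counter(doc1.lower().split()), Counter(doc2.lower().split())
    dot = sum(v1[w] * v2[w] for w in v1.keys() & v2.keys())
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2)

print(cosine_similarity("the cat sat on the mat", "the cat sat on the hat"))
```

Because only shared terms contribute to the dot product, two documents with disjoint vocabularies get similarity 0 even though both vectors are sparse and high dimensional.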
12,190
Measuring Document Similarity
You might want to try this online service for cosine document similarity http://www.scurtu.it/documentSimilarity.html

import urllib,urllib2
import json
API_URL="http://www.scurtu.it/apis/documentSimilarity"
inputDict={}
inputDict['doc1']='Document with some text'
inputDict['doc2']='Other document with some text'
params...
12,191
Cauchy Distribution and Central Limit Theorem
The distribution of the mean of $n$ i.i.d. samples from a Cauchy distribution has the same distribution (including the same median and inter-quartile range) as the original Cauchy distribution, no matter what the value of $n$ is. So you do not get either the Gaussian limit or the reduction in dispersion associated wi...
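This is easy to check by simulation; a sketch using inverse-CDF sampling of a standard Cauchy (replication counts are arbitrary): the interquartile range of the sample means does not shrink as $n$ grows, in stark contrast to the $1/\sqrt{n}$ shrinkage for finite-variance distributions.

```python
import math
import random
import statistics

random.seed(1)

def cauchy():
    # inverse-CDF sampling of a standard Cauchy variate
    return math.tan(math.pi * (random.random() - 0.5))

def iqr_of_means(n, reps=2000):
    """Interquartile range of `reps` sample means, each of n Cauchy draws."""
    means = [sum(cauchy() for _ in range(n)) / n for _ in range(reps)]
    q1, q2, q3 = statistics.quantiles(means, n=4)
    return q3 - q1

print(iqr_of_means(1), iqr_of_means(100))
```

Both printed IQRs sit near 2 (the quartiles of a standard Cauchy are ±1), whatever the sample size.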
12,192
What common forecasting models can be seen as special cases of ARIMA models?
The Box-Jenkins approach incorporates all well-known forecasting models except multiplicative models like the Holt-Winters Multiplicative Seasonal Model where the expected value is based upon a multiplicand. The multiplicative seasonal model can be used to model time series where one has the following (in my opinion a ...
12,193
What common forecasting models can be seen as special cases of ARIMA models?
You can add Drift: ARIMA(0,1,0) with constant. Damped Holt's: ARIMA(1,1,2). Additive Holt-Winters: SARIMA(0,1,$m+1$)(0,1,0)$_m$. However, HW uses only three parameters and that (rather strange) ARIMA model has $m+1$ parameters. So there are a lot of parameter constraints. The ETS (exponential smoothing) and ARIMA class...
12,194
What common forecasting models can be seen as special cases of ARIMA models?
The exponentially weighted moving average (EWMA) is algebraically equivalent to an ARIMA(0,1,1) model. To put it another way, the EWMA is a particular model within the class of ARIMA models. In fact, there are various types of EWMA models and these happen to be included in the class of ARIMA(0,d,q) models - see Cogge...
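The algebraic equivalence is easy to verify numerically: one-step ARIMA(0,1,1) forecasts, f(t+1) = y(t) + theta * e(t) with e(t) the one-step forecast error, coincide with EWMA forecasts using smoothing constant alpha = 1 + theta. A sketch with an arbitrary made-up series and theta = -0.4 (so alpha = 0.6):

```python
theta = -0.4
alpha = 1 + theta  # EWMA smoothing constant implied by the MA(1) coefficient
y = [10.0, 12.0, 11.5, 13.0, 12.2, 12.8, 13.5]

# ARIMA(0,1,1) one-step forecasts: f[t+1] = y[t] + theta * e[t], e[t] = y[t] - f[t]
f_arima = [y[0]]
for t in range(len(y) - 1):
    e = y[t] - f_arima[t]
    f_arima.append(y[t] + theta * e)

# EWMA forecasts: f[t+1] = alpha * y[t] + (1 - alpha) * f[t]
f_ewma = [y[0]]
for t in range(len(y) - 1):
    f_ewma.append(alpha * y[t] + (1 - alpha) * f_ewma[t])

print(f_arima)
print(f_ewma)
```

Given the same initialization, the two recursions produce identical forecast paths, which is the sense in which the EWMA sits inside the ARIMA class.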
12,195
What common forecasting models can be seen as special cases of ARIMA models?
"The Gauss-Markov plus white noise model of the first difference is a special case of an ARIMA (1,1,1) and the damped cosine plus white noise model is a special case of an ARIMA (2,1,2)." https://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=9290&context=rtd
12,196
When should one use multiple regression with dummy coding vs. ANCOVA?
ttnphns is correct. However, given your additional comments I would suggest that the reviewer wanted the change merely for interpretation. If you want to stick with ANOVA style results just call it ANOVA. ANCOVA and ANOVA are the same, as ttnphns pointed out. The difference is that with ANCOVA you don't treat the ...
12,197
When should one use multiple regression with dummy coding vs. ANCOVA?
These two are the same thing. For example, in SPSS the procedure where I specify ANCOVA is called GLM (general linear model); it asks to input "factors" (categorical predictors) and "covariates" (continuous predictors). If I recode the "factors" into dummy variables (omitting one redundant category from each factor) a...
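A sketch of the dummy coding described above, showing the design matrix that a regression (or, internally, an ANCOVA procedure) would use. The factor levels, covariate, and coefficients are all made up, and the response is noiseless so that least squares recovers them exactly:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 12
group = np.repeat([0, 1, 2], n // 3)   # a 3-level factor
covariate = rng.normal(size=n)         # a continuous covariate

# dummy-code the factor, omitting group 0 as the reference category
d1 = (group == 1).astype(float)
d2 = (group == 2).astype(float)
X = np.column_stack([np.ones(n), d1, d2, covariate])

# noiseless response with known intercept, group effects, and slope
beta_true = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta_true

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)
```

The fitted coefficients are the group contrasts against the reference category plus the covariate slope, i.e. exactly the quantities an ANCOVA table reports.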
12,198
When should one use multiple regression with dummy coding vs. ANCOVA?
ANCOVA is a form of regression but not identical to other multiple regression techniques. SPSS is not robust enough software to trust in anything outside of some psychology research. Within econometrics, biology, chemistry, physics, and finance SPSS is not accurate or useful in general. Even within psychology, SPSS pre...
12,199
When should one use multiple regression with dummy coding vs. ANCOVA?
Multiple linear regression appears to me more appropriate than ANCOVA in this situation, as the journal reviewer recommends. Try running both a multiple regression and an ANCOVA, and comparing the results. They probably will not be identical. ANCOVA and multiple linear regression are similar, but regression is more ap...
12,200
Fitting an exponential model to data
I am not completely sure what you're asking, because your lingo is off. But assuming that your variables aren't independent of one another (if they were, then there'd be no relation to find) I'll give it a try. If x is your independent (or predictor) variable and y is your dependent (or response) variable, then this sh...
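If the intended model is y = a * exp(b * x), one common sketch is to regress log(y) on x by simple least squares (which implicitly treats the errors as multiplicative); pure-Python, with made-up noiseless data so the fit is exact:

```python
import math

def fit_exponential(xs, ys):
    """Fit y = a * exp(b * x) by linear regression of log(y) on x."""
    logs = [math.log(y) for y in ys]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_l = sum(logs) / n
    # ordinary least-squares slope and intercept on the log scale
    b = (sum((x - mean_x) * (l - mean_l) for x, l in zip(xs, logs))
         / sum((x - mean_x) ** 2 for x in xs))
    a = math.exp(mean_l - b * mean_x)
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]   # exact exponential: a=2, b=0.7
print(fit_exponential(xs, ys))
```

With noisy data one would instead consider nonlinear least squares on the original scale, since the log transform downweights large-y observations.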