Q: Is the idea of a bias-variance "tradeoff" a false construct?

A: First of all, the bias-variance tradeoff (BVT) applies not only to parameter estimators but also to prediction. BVT is usually invoked in machine learning on the prediction side, more precisely in the minimization of the Expected Prediction Error (EPE). In this last sense the BVT was treat...
Q: Is the idea of a bias-variance "tradeoff" a false construct?

A: Who's to say that if you have an estimator $\hat{f}$ of $Y = f(X) + \epsilon$, you couldn't find an estimator $\hat{g}$ that not only has lower expected squared error, but has lower bias and variance than $\hat{f}$ as well?

A similar question was "Bias / variance tradeoff math". In that question, it was asked if bi...
Q: Empirical Risk Minimization: Rewriting the expected loss using Bayes' rule and the definition of expectation

A: I'll assume continuous distributions here but, if any variable is discrete, simply replace the corresponding integral with a sum. Recall that the expectation of a function $f$ with respect to a continuous distribution $p(z)$ is:

$$E_{z \sim p(z)}\big[f(z)\big] = \int_\mathcal{Z} p(z) f(z) dz$$

The objective function in...
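As a concrete illustration of that definition, here is a minimal sketch (function names are my own, not from the question) approximating an expectation by Monte Carlo averaging, which is exactly the move that turns an expected loss into an empirical risk over samples:

```python
import random
import statistics

def mc_expectation(f, sampler, n=100_000, seed=0):
    """Approximate E_{z ~ p(z)}[f(z)] by averaging f over draws from p."""
    rng = random.Random(seed)
    return statistics.fmean(f(sampler(rng)) for _ in range(n))

# E[z^2] under a standard normal is its variance, i.e. 1.
est = mc_expectation(lambda z: z * z, lambda rng: rng.gauss(0.0, 1.0))
```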
Q: Deriving posterior update equation in a Variational Bayes inference

A: Joint distribution.

Using the graphical model you provided, we get the following joint distribution over all variables of interest, conditioning on model parameters:

$$p(\Theta, \mathbf{v} | a_0, b_0, c_0, d_0, \left\{e_0^s, f_0^s \right\}_{s = 0,1}, \left\{ e_0^{s0}, f_0^{s0}, e_0^{s1}, f_0^{s1} \right\}_{s=2:L})$$

I...
Q: Why do we not interpret main effects if interaction terms are significant in ANOVA?

A: Suppose that we have the following regression relationship:

$y=\beta_0 + \beta_1 X + \beta_2 Z + \beta_3 X \times Z + \varepsilon$.

If there is no interaction term, i.e., $y=\beta_0 + \beta_1 X + \beta_2 Z + \varepsilon$, we can interpret the main effect as usual: "Holding the other variable fixed, changing one unit in $X$ a...
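With the interaction present, the effect of a one-unit change in $X$ is $\beta_1 + \beta_3 Z$, so it depends on $Z$. A small sketch with made-up coefficient values:

```python
# Hypothetical coefficients for y = b0 + b1*X + b2*Z + b3*X*Z.
b0, b1, b2, b3 = 1.0, 2.0, 0.5, -1.5

def effect_of_X(z):
    """Marginal effect of a one-unit change in X; it varies with Z."""
    return b1 + b3 * z

# Only at Z = 0 does the effect equal b1, so b1 alone cannot be read
# as "the" main effect once the interaction term is in the model.
effects = {z: effect_of_X(z) for z in (0.0, 1.0, 2.0)}
```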
Q: Why do we not interpret main effects if interaction terms are significant in ANOVA?

A: This is discussed pretty well in chapter 8 of John Fox's book, Applied Regression Analysis and Generalized Linear Models, and in Weisberg's Applied Linear Regression. Both emphasize that your question is related to Nelder's (1977) principle of marginality.

From this last book, for example:

The approach t...
Q: Bias and variance in KNN and decision trees

A: Fewer neighbors usually mean closer neighbors (unless there are multiple close neighbors with equal distance from the point of interest $x_0$). Modelling $x_0$ as a function of only the few closest neighbors, i.e. the most similar data points, allows for high flexibility (utilizing the features of the closest data poin...
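The flexibility/rigidity contrast can be seen in a toy 1-nearest-neighbor vs all-neighbors regression (this sketch and its data are illustrative only):

```python
def knn_predict(x0, xs, ys, k):
    """Average the y-values of the k training points nearest to x0."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    return sum(ys[i] for i in nearest) / k

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [x * x for x in xs]            # curved target y = x^2

# k = 1 reproduces the local point exactly (flexible: low bias, high variance);
# k = n averages everything into one constant (rigid: high bias, low variance).
p_small_k = knn_predict(2.0, xs, ys, k=1)
p_large_k = knn_predict(2.0, xs, ys, k=5)
```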
Q: probability calibration and Brier score

A: The short answer is that it only makes sense to calculate the Brier score for the conditional probabilities, $\hat y = P(y=1|X)$, where $y$ is the outcome, $\hat y$ is your prediction, and $X$ are your predictors.

In other words, $\hat y$ is the probability that $y=1$, conditional on this particular value of the predic...
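The Brier score itself is just the mean squared difference between the predicted conditional probability and the 0/1 outcome; a minimal sketch (the numbers are made up):

```python
def brier_score(probs, outcomes):
    """Mean of (P(y=1|X) - y)^2 over the sample."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

probs = [0.9, 0.2, 0.7, 0.1]        # predicted conditional probabilities
outcomes = [1, 0, 1, 1]             # observed 0/1 outcomes
bs = brier_score(probs, outcomes)
```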
Q: How to interpret GLMM results?

A: There are two main problems here:

As with other linear models, there is no requirement for the outcome variable to be normally distributed in a linear mixed effects model. So shapiro.test(x = Incidence$Inc.) is a waste of time, and so is any procedure that tries to find the distribution of the outcome, such as fit.cont th...
Q: On the difference between the main effect in a one-factor and a two-factor regression

A: "$b_2$ here corresponds to the conditional effect of $X_2$ when $X_1=0$. A common mistake is to understand $b_2$ as being the main effect of $X_2$, i.e. the average effect of $X_2$ over all possible values of $X_1$."

Indeed. I typically answer at least one question per week where this mistake is made. It is also worth ...
Q: On the difference between the main effect in a one-factor and a two-factor regression

A: Adding to @RobertLong's answer, there is a slight conceptual mistake in the way $b_2$ is described in the question in the case where $X_1$ was centered. It is indeed true that $b_2$ becomes the average effect of $X_2$ over all possible values of $X_1$, in the sense that $\overline{b_2+b_3X_1}=b_2$, but it should be emp...
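A numeric sketch of that point (coefficients and the $X_1$ sample are made up): with an interaction, the conditional effect of $X_2$ is $b_2 + b_3 X_1$, and averaging it over the sample equals the effect evaluated at $\bar X_1$, which is what $b_2$ reads as after centering $X_1$:

```python
# Hypothetical coefficients for y = b0 + b1*X1 + b2*X2 + b3*X1*X2.
b2, b3 = 2.0, 0.5
x1_values = [0.0, 2.0, 4.0]          # toy sample of X1
x1_mean = sum(x1_values) / len(x1_values)

def effect_of_X2(x1):
    """Conditional effect of X2 at a given value of X1."""
    return b2 + b3 * x1

at_zero = effect_of_X2(0.0)          # uncentered b2: effect at X1 = 0 only
avg_effect = sum(effect_of_X2(x) for x in x1_values) / len(x1_values)
at_mean = effect_of_X2(x1_mean)      # equals avg_effect: the centered reading
```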
Q: Are Brier and log-loss proper or strictly proper scoring rules?

A: Both are strictly proper. See Selten ("Axiomatic Characterization of the Quadratic Scoring Rule", Experimental Economics, 1998), who uses the term "incentive compatible" in place of "strictly proper". His proofs work with distributions with finite support, but apply to the continuous case as well, with the necessary mo...
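Strict propriety of the Brier score can be checked directly: the expected score under a true event probability $p$ is uniquely minimized by forecasting $q = p$. A small sketch:

```python
def expected_brier(q, p):
    """Expected Brier score of forecast q when the true P(y=1) is p."""
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p_true = 0.7
scores = {q / 10: expected_brier(q / 10, p_true) for q in range(11)}
best_forecast = min(scores, key=scores.get)   # the honest forecast wins
```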
Q: Intuition behind m-out-of-n bootstrap

A: I would argue that it's not so much that the $m$ of $n$ bootstrap does smoothing as that it makes smoothing unnecessary.

There are two components to the $m$ of $n$ bootstrap. The first is sampling just $m$ observations; the second is knowing the convergence rate.

A big part of the advantage of the subsampling is being...
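Mechanically, the procedure just resamples $m < n$ observations and recomputes the statistic; a bare-bones sketch (the names and the toy statistic, the sample maximum, are my own choices, the maximum being a classic case where the ordinary bootstrap struggles):

```python
import random

def m_of_n_bootstrap(data, stat, m, reps=1000, seed=0):
    """Draw m observations with replacement, reps times, recomputing stat."""
    rng = random.Random(seed)
    return [stat(rng.choices(data, k=m)) for _ in range(reps)]

data = list(range(1, 101))           # toy sample with n = 100
draws = m_of_n_bootstrap(data, stat=max, m=10)
# The spread of `draws` (rescaled by the known convergence rate)
# estimates the sampling variability of the statistic.
```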
Q: Optimization as sampling for stochastic functions

A: To expand upon the solution which is hinted at in the answer of @Xi'an:

Assume that $f$ is represented as

$$f(x) = \mathbf{E}_{\rho(\xi)} \left[ F(x, \xi) \right]$$

where $\xi$ is some auxiliary source of randomness, and $0 \leqslant F(x, \xi) \leqslant 1$ for all $(x, \xi)$.

One can then develop

\begin{align}
\exp(-\b...
Q: Optimization as sampling for stochastic functions

A: This is a very interesting question for which there is no clear-cut answer. It all depends on the computing budget, and the output of a realistic approach will depend on this computing budget.

My suggestion would be to mix

(i) simulated annealing, that is, simulating from a target like $$h_t(x)\propto e^{-T_t \cdot \mathbb E[f(x...
Q: Test if function "raises faster then linear"

A: Saying that a function "rises faster than linear" essentially means that its derivative increases; in other words, its second derivative is positive.

The way you approximate the second derivative of a function is with a parabola. This is true for the Taylor expansion, when you want to approximate a function starting from a po...
Q: Test if function "raises faster then linear"

A: Assuming you already know that $f$ is increasing, we can further posit that it increases super-linearly if its first derivative is monotone increasing in $x$ (this also makes it a convex function). Since we're working with a discrete, countable set of observations

$$\{ (x_1, f_1) , (x_2, f_2), \dots, (x_n, f_n) \}$$

we...
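On such a grid, the natural check is whether the discrete second differences (slopes of slopes) are positive everywhere; a rough sketch with made-up data:

```python
def second_differences(xs, fs):
    """Slope changes between consecutive chords: a discrete second derivative."""
    slopes = [(fb - fa) / (xb - xa)
              for (xa, fa), (xb, fb) in zip(zip(xs, fs), zip(xs[1:], fs[1:]))]
    return [s2 - s1 for s1, s2 in zip(slopes, slopes[1:])]

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
d2_quadratic = second_differences(xs, [x * x for x in xs])   # all positive
d2_linear = second_differences(xs, [3.0 * x for x in xs])    # all zero
```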
Q: Are Neural Networks Mixture Models?

A: They both fall into the general domain of graphical models.

As you've pointed out, they are very similar to each other, for they both have hidden layers and both require iterative methods to perform inference tasks.

But they were proposed from different initial ideas. "Neural network" was originally proposed by the conne...
Q: Are Neural Networks Mixture Models?

A: No, neural networks per se are not mixture models, though ideas from mixture models have influenced some neural network modules, like softmax attention.

Mixture models require a mixing in the probability density function, corresponding to the logsumexp() function. Some NNs use logsumexp, on pdf quantities and non-pdf ...
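The logsumexp mentioned above is how a mixture's log-density $\log \sum_k w_k p_k$ is computed stably; a minimal sketch with made-up component values:

```python
import math

def logsumexp(vals):
    """log(sum(exp(v))) computed without overflow by factoring out the max."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

log_w = [math.log(0.3), math.log(0.7)]   # mixture weights
log_p = [-1.2, -0.4]                     # hypothetical component log-densities
mixture_logpdf = logsumexp([lw + lp for lw, lp in zip(log_w, log_p)])
```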
Q: Are Neural Networks Mixture Models?

A: "Also to my understanding, in a neural network classifier with 1 hidden layer, you have a mixture of functions (sigmoids, relus, etc) that are aggregated ... So my question is: do neural networks fall in the general domain of mixture models? If so, why are they never referred to as such?"

You can consider a single layer...
Q: Relation between Pi and the Median

A: To answer your specific question about the median, there's a passage in your second link, in the section on the sample median:

"Sampling distribution: ... The distribution of the sample median from a population with a density function $f(x)$ is asymptotically normal with mean $m$ and variance

$$\frac{1}{4nf(m)^2}$$
..."
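This is where $\pi$ enters for a normal population: the density at the median is $1/(\sigma\sqrt{2\pi})$, so the asymptotic variance $1/(4nf(m)^2)$ simplifies to $\pi\sigma^2/(2n)$. A quick numeric check:

```python
import math

sigma, n = 1.0, 1000
f_at_median = 1.0 / (sigma * math.sqrt(2.0 * math.pi))  # normal density at m
var_median = 1.0 / (4.0 * n * f_at_median ** 2)
closed_form = math.pi * sigma ** 2 / (2.0 * n)          # pi appears explicitly
```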
Q: Wilcoxon Rank Sum Test vs $t$-test Power Simulation

A: You say "It is well known that the Wilcoxon rank sum test is more powerful for detecting shifts in location when the data is non-normal" but, stated so generally, this is not actually the case. It is not well known at all (for all that it might be widely believed), because it's not true.

For some non-normal distribution...
Q: Sufficient statistics are not unique?

A: No; if they were unique, a transformation of the sufficient statistic wouldn't be a sufficient statistic (unless it is the identity transformation). For example, let $T(\mathbf{y})=\bar y$, i.e. the sample mean. A 1-1 transformation here would be scaling, i.e. $T_1(\mathbf{y})=2\bar y$. This means any information that can ...
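A tiny sketch of why a 1-1 rescaling loses nothing: for a normal likelihood with known variance, the part involving $\mu$ depends on the data only through $\bar y$, and $\bar y$ is exactly recoverable from $T_1 = 2\bar y$ (the data values here are made up):

```python
data = [1.2, 0.7, 2.1, 1.5]
n = len(data)
ybar = sum(data) / n
t1 = 2.0 * ybar                      # a 1-1 transformation of the mean

def loglik_kernel(mu, mean, n):
    """mu-dependent part of the N(mu, 1) log-likelihood; uses only the mean."""
    return -n * (mean - mu) ** 2 / 2.0

# Identical inference either way: the mean is recovered as t1 / 2.
same = loglik_kernel(1.0, ybar, n) == loglik_kernel(1.0, t1 / 2.0, n)
```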
Q: Confusion about Karush-Kuhn-Tucker conditions in SVM derivation

A: I've got the answer, thanks to DanielTheRocketMan for providing half of it!

For an $x^{(i)}$ that is a support vector, the following equality holds:

$$y^{(i)} (w^Tx^{(i)} + b) = 1$$

This satisfies the constraint irrespective of $\alpha_i$.

For an $x^{(i)}$ that is not a support vector, the following inequality holds:

$$
y^{(i)} (w^...
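The two cases above are instances of the KKT complementary slackness condition; a sketch in the usual notation (this summary is my own, not part of the original answer):

```latex
\alpha_i \,\bigl( y^{(i)} (w^T x^{(i)} + b) - 1 \bigr) = 0
\quad\Longrightarrow\quad
\begin{cases}
\alpha_i > 0 \;\Rightarrow\; y^{(i)} (w^T x^{(i)} + b) = 1 & \text{(support vector)}\\[4pt]
y^{(i)} (w^T x^{(i)} + b) > 1 \;\Rightarrow\; \alpha_i = 0 & \text{(not a support vector)}
\end{cases}
```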
Q: Confusion about Karush-Kuhn-Tucker conditions in SVM derivation

A: I believe that your problem is that you are not seeing the geometry of the problem. See the figure in the Wikipedia article.

Since your points are separable, you can always find a vector $w$ and a $b$ that ensure this constraint.

Note that $y_i=1$ or $y_i=-1$.

You basically must choose $b+w x=0\;\;\; (1) \;\;$ and $\;\;\;\;1/||w...
Q: lme4 - correct formula for a crossed factor nested mixed model

A: It makes very little sense to me to treat Month as fixed while also having Light and Nutrients nested within it. Either you treat Month and Light as crossed random effects, in which case the random part of the formula will be:

(1 | Month) + (1 | Light)

... Or you treat Month as random, in which case you have Light nested with...
48,327 | What is the disadvantage of repeated cross-validation? | There is no disadvantage in doing repeated CV in comparison with a single CV fold. If anything, repeated CV should decrease the variance of our estimate.
An excellent and highly-cited overview on cross-validation procedures can be found in Arlot & Celisse (2010) A survey of cross-validation procedures for model selecti...
48,328 | What is the disadvantage of repeated cross-validation? | With regards to the question of disadvantage, I think that needs refining.
Disadvantage compared to k-fold CV? For large samples, it's computational time (as you noted). For small samples, there is no apparent disadvantage for repeated k-fold CV.
Disadvantage compared to bootstrapping? For small samples, bootstrapping...
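As a hedged illustration of the repeated CV idea discussed above (not part of either answer): a minimal standard-library Python sketch that repeats a k-fold split many times and averages the resulting MSE estimates; the "model" is simply predicting with the training mean, and all data and settings are invented.

```python
import random
import statistics

random.seed(0)
data = [random.gauss(10, 2) for _ in range(60)]  # toy sample (made up)

def kfold_mse(values, k):
    """One k-fold CV estimate of the MSE of predicting with the training mean."""
    shuffled = values[:]
    random.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]
    errors = []
    for i, test in enumerate(folds):
        train = [v for j, fold in enumerate(folds) if j != i for v in fold]
        pred = statistics.mean(train)
        errors.extend((v - pred) ** 2 for v in test)
    return statistics.mean(errors)

# Repeating the whole CV and averaging does not change what is being
# estimated; it only averages away the randomness of the fold split.
repeats = [kfold_mse(data, k=5) for _ in range(20)]
repeated_cv_estimate = statistics.mean(repeats)
```

The only cost relative to a single k-fold run is the 20-fold increase in computation.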
48,329 | Difference between Wald test and Chi-squared test | The relationship between the Wald test and the Pearson $\chi^2$ is a particular example of the relationship between Wald tests and score tests.
The Wald test statistic for the difference between a value of a parameter $\hat \theta$ estimated from a data sample and a null-hypothesis value $\theta_0$ is:
$$W = \frac{ ( ...
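The formula above is cut off in this dump. As a hedged sketch (assuming the standard form $W = (\hat\theta - \theta_0)^2 / \widehat{\mathrm{Var}}(\hat\theta)$), here is a Python comparison of the Wald and score statistics for a binomial proportion; the score statistic coincides with the Pearson chi-squared on the same data. All numbers are invented.

```python
# Hypothetical data: x successes in n trials, testing theta_0 = 0.5.
n, x, theta0 = 100, 60, 0.5
theta_hat = x / n

# Wald: variance evaluated at the estimate theta_hat.
wald = (theta_hat - theta0) ** 2 / (theta_hat * (1 - theta_hat) / n)

# Score: variance evaluated at the null value theta_0.
score = (theta_hat - theta0) ** 2 / (theta0 * (1 - theta0) / n)

# Pearson chi-squared on the observed 2-cell table [x, n - x].
observed = [x, n - x]
expected = [n * theta0, n * (1 - theta0)]
pearson = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# pearson equals the score statistic; the Wald statistic differs only in
# plugging theta_hat rather than theta_0 into the variance.
```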
48,330 | Is it possible to simultaneously call multiple regression coefficients significant? | It's true that with 40 variables, you would expect two to be significant (when using the conventional alpha of .05) by chance alone. That's why we shouldn't interpret the individual t-tests for a multiple regression model until after assessing the F-test for the model taken as a whole. (It may help you to read my ans...
48,331 | Is it possible to simultaneously call multiple regression coefficients significant? | If I'm reading your question correctly you are simply asking about multiple comparisons in statistics, which is a well-known phenomenon. To remedy this, you can correct your significance using something like a Bonferroni correction, although many types of correction methods exist.
Note that you also need to consider the i...
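A hedged sketch of the Bonferroni correction mentioned above, in Python; the p-values are invented for illustration.

```python
# Hypothetical p-values for m = 5 coefficient t-tests.
p_values = [0.001, 0.012, 0.030, 0.25, 0.70]
alpha = 0.05
m = len(p_values)

# Bonferroni: compare each p-value to alpha / m (equivalently, multiply
# each p-value by m, capping at 1, and compare to alpha).
adjusted = [min(1.0, p * m) for p in p_values]
significant = [p < alpha / m for p in p_values]
```

With these numbers only the first test survives the corrected threshold of 0.01.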
48,332 | Representing a GAM with truncated power basis as a mixed model | The model you discuss in your question can be written as
$$
y = X \beta + F b+e
$$
where $X$ is the matrix with columns equal to $1, x, x^2, x^3,...$ and $F$ is a matrix whose columns are obtained by computing the truncated polynomials.
The (penalized) objective function is then:
$$
Q_{p} = \|y - X \beta - F b\|^2 + k...
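A hedged Python sketch of constructing the two design matrices described above: $X$ holds the global cubic polynomial and $F$ the truncated cubics $(x-\kappa)_+^3$. The grid and knot locations are invented.

```python
# Cubic truncated power basis: X = [1, x, x^2, x^3], F = [(x - kappa)_+^3]
# for each knot kappa.  Grid and knots are made up for illustration.
xs = [i / 10 for i in range(11)]   # x on [0, 1]
knots = [0.3, 0.6]

X = [[1.0, x, x ** 2, x ** 3] for x in xs]
F = [[max(0.0, x - k) ** 3 for k in knots] for x in xs]
```

Fitting then minimizes $\|y - X\beta - Fb\|^2$ plus a penalty on $b$, which is exactly the mixed-model representation with $b$ treated as a random effect.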
48,333 | $cor(B_1,Y) > cor(B_2,Y) > 0$ but $cor(A + B_1, A+Y) < cor(A + B_2, A+Y)$. Is this possible? | Because correlation tells you nothing about the magnitudes of variables, you can reverse their relative order by adjusting the magnitudes suitably.
Here, for instance, is a scatterplot matrix of some $(Y, B_1, B_2)$ data:
Clearly $Y$ is more highly correlated with $B_1$ than with $B_2.$
To help us appreciate the varia...
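A hedged numerical sketch of the magnitude argument above, using covariance algebra on independent unit-variance components; every constant here is invented to make the ordering flip.

```python
import math

# Build Y, B1, B2, A from independent unit-variance pieces U, V, W, A0:
#   Y  = U
#   B1 = s1 * (U + 0.5 * V)  -> cor(B1, Y) = 1/sqrt(1.25) ~ 0.894
#   B2 = s2 * (U + 2.0 * W)  -> cor(B2, Y) = 1/sqrt(5)    ~ 0.447
#   A  = a * A0 (independent of everything else)
# The correlations with Y do not depend on the scales s1, s2, but the
# correlations of A + Bi with A + Y do.
a, s1, s2 = 1.0, 100.0, 0.1

cor_b1_y = 1 / math.sqrt(1.25)
cor_b2_y = 1 / math.sqrt(5)

def cor_shifted(s, base_var):
    """cor(A + B, A + Y) for B with Var(B) = s**2 * base_var, Cov(B, Y) = s."""
    cov = a ** 2 + s
    var_b = a ** 2 + s ** 2 * base_var
    var_y = a ** 2 + 1
    return cov / math.sqrt(var_b * var_y)

cor_ab1 = cor_shifted(s1, 1.25)   # ~ 0.639
cor_ab2 = cor_shifted(s2, 5.0)    # ~ 0.759
```

Making $B_1$ very large and $B_2$ very small reverses the ordering once $A$ is added, exactly as the answer claims.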
48,334 | Why is regularization used only in training but not in testing? | I think there is a slight confusion.
What the author means is that during testing we focus on the MSE as this is what we evaluate performance on. The MSE plus the regularisation penalty is what we use to fit the model with. We are using the "same model"; just what we measure during training and testing is not the same....
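A hedged one-parameter ridge sketch of this point: training minimizes MSE plus a penalty, while evaluation reports the plain MSE of the same fitted model. Data and $\lambda$ are invented.

```python
# One-parameter ridge: y ~ w * x.  Training minimizes
#   (1/n) * sum((y - w*x)**2) + lam * w**2,
# whose minimizer is w = sum(x*y) / (sum(x*x) + n*lam).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
lam = 1.0
n = len(xs)

sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)
w = sxy / (sxx + n * lam)

def mse(w, xs, ys):
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_objective = mse(w, xs, ys) + lam * w ** 2  # minimized during training
test_metric = mse(w, xs, ys)                     # reported at test time
```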
48,335 | Linear Mixed Effects Model Variances | The terms within-individual variance and among-individual variance are not commonly found in the mixed effects model literature. They arise more commonly in the ANOVA literature, and rather than "among", the usual term is "between". Total variance is partitioned into that which is attributable to differences within indi...
48,336 | Linear Mixed Effects Model Variances | The following pieces of your model are fixed and known: $X_i$ and $Z_i$. The vector $\beta$ is fixed but unknown. You have two random pieces in your model, $b_i$ (I would expect this to be $b$, i.e. shared for all $i$, and I'm going to use $b$ below), and $\epsilon_i$. I suspect the setup of your problem also specifies...
48,337 | The ergodicity problem in economics | If your friend is talking about the expected wealth he's wrong.
The point of the figure is that the expected value of gambling is very much not indicative of what happens typically for longer trajectories, so your simulation just backs that up.
We can already see this pattern if we consider gambling twice. With probab...
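A hedged sketch of the usual multiplicative coin-flip gamble (the +50%/-40% factors are illustrative, not necessarily the ones in the figure): the per-round expected factor exceeds 1, yet the typical trajectory decays because the geometric mean is below 1.

```python
import random

up, down = 1.5, 0.6  # multiply wealth by 1.5 on heads, 0.6 on tails

# Expected per-round growth factor: (1.5 + 0.6) / 2 = 1.05 > 1 ...
expected_factor = (up + down) / 2
# ... but the typical path follows the geometric mean sqrt(1.5*0.6) < 1.
typical_factor = (up * down) ** 0.5

# Simulated check: the median of many 100-round trajectories is tiny,
# while the sample mean is pulled up by a few huge outcomes.
random.seed(1)
n_rounds, n_paths = 100, 2000
finals = []
for _ in range(n_paths):
    w = 1.0
    for _ in range(n_rounds):
        w *= up if random.random() < 0.5 else down
    finals.append(w)
finals.sort()
median_wealth = finals[n_paths // 2]
mean_wealth = sum(finals) / n_paths
```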
48,338 | Why can't standard conditional language models be trained left-to-right *and* right-to-left? | In a standard language model (LM), you're trying to predict the probability of the next word given the past. The past could be a fixed window size of $n$ words, as in your example, or an indefinitely long window size, as in the case of RNNs (and their variants). Without loss of generality, let's stick with the RNN as t...
48,339 | Prove that $\frac{(n-2)s^2}{\sigma^2}\sim \chi^{2}_{n-2}$ | Without using the orthogonal change of variables in the linked answer, you can work under the general matrix setup of multiple linear regression. Here we are concerned with ordinary least squares.
The key result to be used here is the Fisher-Cochran theorem on distribution of quadratic forms (e.g. see page 185-186, 2...
48,340 | Understanding glm and link functions: how to generate data? | Here's how to generate from a glm (the order of some items can be moved):
Choose your family and link function.
Choose your predictors (IV's) for each observation you want to simulate.
Choose your coefficients.
Evaluate the linear predictor for each observation.
Transform by the inverse of the link function to get th...
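A hedged Python sketch (not the R of the answer below) following these steps for a Bernoulli family with logit link; the coefficients and predictor range are invented.

```python
import math
import random

random.seed(42)

# 1. Family: Bernoulli; link: logit (inverse link is the logistic function).
# 2. Predictor values for each observation (made-up uniform draws).
n = 1000
x = [random.uniform(-2, 2) for _ in range(n)]

# 3. Coefficients (made up).
b0, b1 = -0.5, 1.2

# 4. Linear predictor, 5. inverse link -> mean mu,
# 6. draw the response from the chosen family.
eta = [b0 + b1 * xi for xi in x]
mu = [1 / (1 + math.exp(-e)) for e in eta]
y = [1 if random.random() < m else 0 for m in mu]
```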
48,341 | Understanding glm and link functions: how to generate data? | It matters whether the error term is included in the exp() call or not. That's your big issue for this code. Consider this:
# first, set the random seed, so that everything is reproducible
set.seed(5671)
N = 10000
e = rnorm(N,0,1)
x1 = runif(N,10,30)
y1 = exp(5*x1+ 10 + e)
y2 = exp(5*x1+ 10) + e
mod1.1 = glm(y1...
48,342 | Modelling longitudinal data with crossed random effects | First, note that the simulated data above results in a singular model fit because there is no variation in the response among any of the random factors. This can be overcome with a simple modification:
library(lme4)
set.seed(15)
participant <- rep(1:40, each = 30)
session <- rep(rep(1:3, each = 10), times = 40)
item...
48,343 | how to scale the density plot for my histogram | The area under a true density function is 1. So unless the total area of the bars in the histogram is also 1, you cannot make a useful match between a true density function and the histogram.
Using actual density functions. A correct (and perhaps the easiest) course of action is to do what you explicitly say (without g...
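A hedged numerical sketch of the scaling implied above: dividing each bar's count by $n \times \text{bin width}$ puts the histogram in density units, so the bar areas sum to 1 and a true density can be overlaid meaningfully. Counts and bin width are invented.

```python
# Scale histogram bars to density units: height = count / (n * bin_width).
counts = [3, 12, 25, 30, 18, 9, 3]   # made-up bin counts
bin_width = 0.5
n = sum(counts)

density_heights = [c / (n * bin_width) for c in counts]
total_area = sum(h * bin_width for h in density_heights)  # should be 1.0
```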
48,344 | What's the difference between "Artificial neuron" and "Perceptron"? | Perceptron is an early type of a neural network for binary classification without hidden layers. It is a model of the form
$$
y=\sigma(\mathbf w^T \mathbf x)
$$
where $\sigma$ is the Heaviside step function. It can be trained using the perceptron algorithm.
You could say that perceptron is a neural network with a singl...
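A hedged sketch of the perceptron algorithm mentioned above, with the Heaviside step as $\sigma$, on a tiny linearly separable problem (logical AND); the training setup is invented for illustration.

```python
# Perceptron learning on logical AND.  The first input component is a
# constant 1 acting as the bias term.
data = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]
w = [0.0, 0.0, 0.0]

def predict(x):
    # Heaviside step: output 1 iff w . x > 0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Perceptron rule: on a mistake, move w toward (or away from) the input.
for _ in range(20):            # more than enough epochs for this problem
    errors = 0
    for x, t in data:
        y = predict(x)
        if y != t:
            errors += 1
            w = [wi + (t - y) * xi for wi, xi in zip(w, x)]
    if errors == 0:
        break
```

For separable data the perceptron convergence theorem guarantees this loop terminates with all points classified correctly.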
48,345 | What is the meaning of generating data from a probabilistic model such as a naive bayes classifier? | Having the ability to generate data from the model may be useful for many reasons, e.g.
Simulate the data from the model to judge if the representation of the reality by your model is reasonable, to conduct posterior predictive checks (compare distribution of the simulated data with the empirical data),
If you can gen...
48,346 | What is the meaning of generating data from a probabilistic model such as a naive bayes classifier? | [Naive] bayes is a generative model, which means we can generate data using it if we wanted. In NB, we estimate $p(\mathbf{x}|y)$, where $\mathbf{x}$ is our feature vector and $y$ is the class variable. For example, we first pick a $y$, indicating the class, and then pick word(s) according to the probability distributi...
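A hedged Python sketch of that two-step generative process: draw a class from the prior, then draw words from the class-conditional distribution. All probabilities and the tiny vocabulary are invented.

```python
import random

random.seed(7)

# Made-up naive Bayes parameters: class prior and per-class word probabilities.
prior = {"sports": 0.4, "politics": 0.6}
word_probs = {
    "sports":   {"goal": 0.5, "team": 0.4, "vote": 0.1},
    "politics": {"goal": 0.1, "team": 0.2, "vote": 0.7},
}

def generate_document(length):
    # 1. pick a class y from the prior; 2. pick words i.i.d. given y.
    y = random.choices(list(prior), weights=prior.values())[0]
    words = random.choices(list(word_probs[y]),
                           weights=word_probs[y].values(), k=length)
    return y, words

label, doc = generate_document(5)
```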
48,347 | Statistical power of t-test in mildly skewed dataset | I will address the computation of the power of a one-sample t test.
Suppose we wish to use $n = 20$ observations from a normal distribution
to test $H_0: \mu = 110$ against $H_a: \mu < 110$ at the 5% level. Then
we will reject $H_0$ when the t statistic $T = \frac{\bar X - 110}{S/\sqrt{20}} < -1.729,$ where $S$ is the ...
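A hedged Monte-Carlo sketch of the power of that one-sided test, at an invented alternative (true mean 105, true sd 10): power is the fraction of simulated samples in which $T < -1.729$.

```python
import math
import random
import statistics

random.seed(3)

# Monte-Carlo power at a made-up alternative: mean 105, sd 10, n = 20,
# rejecting H0: mu = 110 when T < -1.729.
n, mu_true, sd_true, crit = 20, 105.0, 10.0, -1.729

def one_t_stat():
    xs = [random.gauss(mu_true, sd_true) for _ in range(n)]
    xbar = statistics.mean(xs)
    s = statistics.stdev(xs)
    return (xbar - 110.0) / (s / math.sqrt(n))

reps = 4000
power = sum(one_t_stat() < crit for _ in range(reps)) / reps
```

For this effect size (half a standard deviation) the simulated power should come out near 0.7.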
48,348 | Statistical power of t-test in mildly skewed dataset | This has been discussed at length on this site. The t-test is not very robust to skewness. For example, with the log-normal distribution a sample size of 50,000 is not large enough for the t-based method to be sufficiently accurate. The Wilcoxon signed-rank one-sample test does not test a median. The Wilcoxon-Mann-...
48,349 | Statistical power of t-test in mildly skewed dataset | How does the equation for 𝑇 approach normal distribution?
Well, that isn't quite true. The sampling distribution for that statistic approaches a standard normal in the limit as n grows large. That is very different. T
What is the "statistical power" that people talk about? Does it give information about how ma... | Statistical power of t-test in mildly skewed dataset | How does the equation for 𝑇 approach normal distribution?
Well, that it isn't quite true. The sampling distribution for that statistic approaches a standard normal in the limit as n grows large. Th | Statistical power of t-test in mildly skewed dataset
How does the equation for 𝑇 approach normal distribution?
Well, that it isn't quite true. The sampling distribution for that statistic approaches a standard normal in the limit as n grows large. That is very different. T
What is the "statistical power" that peo... | Statistical power of t-test in mildly skewed dataset
How does the equation for 𝑇 approach normal distribution?
Well, that it isn't quite true. The sampling distribution for that statistic approaches a standard normal in the limit as n grows large. Th |
48,350 | How could I find the prediction interval of a future observation given the present dataset? | There is a conventional concept that is a close match to your question: a nonparametric prediction interval. These are amazingly easy to compute and can work well with sufficiently large datasets.
A "prediction interval" is a statistical problem where you intend to use an initial set of data to establish limits betwee... | How could I find the prediction interval of a future observation given the present dataset? | There is a conventional concept that is a close match to your question: a nonparametric prediction interval. These are amazingly easy to compute and can work well with sufficiently large datasets.
A | How could I find the prediction interval of a future observation given the present dataset?
There is a conventional concept that is a close match to your question: a nonparametric prediction interval. These are amazingly easy to compute and can work well with sufficiently large datasets.
A "prediction interval" is a s... | How could I find the prediction interval of a future observation given the present dataset?
There is a conventional concept that is a close match to your question: a nonparametric prediction interval. These are amazingly easy to compute and can work well with sufficiently large datasets.
A |
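A hedged sketch of the order-statistic construction behind nonparametric prediction intervals: for a sorted sample of size $n$, the interval $(x_{(k)}, x_{(n+1-k)})$ covers one future observation with probability $(n+1-2k)/(n+1)$. The sample here is invented.

```python
import random

random.seed(11)

# Nonparametric prediction interval from order statistics.
n, k = 99, 3
sample = sorted(random.gauss(0, 1) for _ in range(n))

lower = sample[k - 1]                  # k-th smallest (1-based)
upper = sample[n - k]                  # k-th largest
coverage = (n + 1 - 2 * k) / (n + 1)   # = 0.94 for n = 99, k = 3
```

No distributional assumption is needed beyond exchangeability of the sample and the future observation.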
48,351 | Top principal components versus most significant random forest variables | PCA maximizes variance captured by linear combinations of your input variables. There are several reasons why this might not extract useful information about your outcome variable:
Maximizing variance does not mean maximizing dispersion if your variables are not approximately normally distributed;
$>90\%$ of the varia...
48,352 | Standard error of sample variance | Looking at the variance of $\hat{\sigma}_{biased}^2$ we have
\begin{eqnarray*}
\mathrm{Var}(\hat{\sigma}_{biased}^2) &=& \mathrm{Var} \left( \dfrac{n-1}{n} \hat{\sigma}_{unbiased}^2 \right) \\
&=& \left( \dfrac{n-1}{n} \right)^2 \mathrm{Var}(\hat{\sigma}_{unbiased}^2).
\end{eqnarray*}
Since $(n-1)/n < 1$ it follows tha... | Standard error of sample variance | Looking at the variance of $\hat{\sigma}_{biased}^2$ we have
\begin{eqnarray*}
\mathrm{Var}(\hat{\sigma}_{biased}^2) &=& \mathrm{Var} \left( \dfrac{n-1}{n} \hat{\sigma}_{unbiased}^2 \right) \\
&=& \le | Standard error of sample variance
Looking at the variance of $\hat{\sigma}_{biased}^2$ we have
\begin{eqnarray*}
\mathrm{Var}(\hat{\sigma}_{biased}^2) &=& \mathrm{Var} \left( \dfrac{n-1}{n} \hat{\sigma}_{unbiased}^2 \right) \\
&=& \left( \dfrac{n-1}{n} \right)^2 \mathrm{Var}(\hat{\sigma}_{unbiased}^2).
\end{eqnarray*}
... | Standard error of sample variance
Looking at the variance of $\hat{\sigma}_{biased}^2$ we have
\begin{eqnarray*}
\mathrm{Var}(\hat{\sigma}_{biased}^2) &=& \mathrm{Var} \left( \dfrac{n-1}{n} \hat{\sigma}_{unbiased}^2 \right) \\
&=& \le |
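A quick Monte Carlo check of the identity above (the normal population and the sample size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 10, 100_000
samples = rng.normal(size=(reps, n))

s2_unbiased = samples.var(axis=1, ddof=1)
s2_biased = samples.var(axis=1, ddof=0)

# Var(biased) / Var(unbiased) equals ((n-1)/n)^2 exactly, since the
# biased estimator is a deterministic rescaling of the unbiased one.
ratio = s2_biased.var() / s2_unbiased.var()
target = ((n - 1) / n) ** 2

# For normal data with sigma = 1, Var(unbiased) = 2 / (n - 1).
var_formula = 2 / (n - 1)
```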
48,353 | Is background subtraction common practice for image classification? | I tried this several times in several projects in the past, and yes, it may help if done properly. However, reliable background removal is not trivial, and it has to be carefully, manually checked for each image. And as it's not a very easy thing to do, it's not commonly done when working with neural networks and large... | Is background subtraction common practice for image classification? | I tried this several times in several projects in the past, and yes, it may help if done properly. However, reliable background removal is not trivial, and it has to be carefully, manually checked for | Is background subtraction common practice for image classification?
I tried this several times in several projects in the past, and yes, it may help if done properly. However, reliable background removal is not trivial, and it has to be carefully, manually checked for each image. And as it's not a very easy thing to do... | Is background subtraction common practice for image classification?
I tried this several times in several projects in the past, and yes, it may help if done properly. However, reliable background removal is not trivial, and it has to be carefully, manually checked for |
48,354 | Is background subtraction common practice for image classification? | I cannot comment due to low reputation. Anyway, as far as I have read about image classification, it is not a common practice, especially for large databases. The most common preprocessing operations are transforming the range between [0, 1] or [-1,1] (like in ResNet50V2) and resizing the images to all the same width ... | Is background subtraction common practice for image classification? | I cannot comment due to low reputation. Anyway, as far as I have read about image classification, it is not a common practice, especially for large databases. The most common preprocessing operations | Is background subtraction common practice for image classification?
I cannot comment due to low reputation. Anyway, as far as I have read about image classification, it is not a common practice, especially for large databases. The most common preprocessing operations are transforming the range between [0, 1] or [-1,1]... | Is background subtraction common practice for image classification?
I cannot comment due to low reputation. Anyway, as far as I have read about image classification, it is not a common practice, especially for large databases. The most common preprocessing operations
48,355 | Does retraining a model on all available data necessarily yield a better model? | This practice derives from an understanding of the bias-variance tradeoff.
Recall that the expected test error can be broken down into three components.
$$ E \left[ \text{Test Error} \right] = \text{Bias}^2 + \text{Variance} + \text{Irreducible Error}$$
Assuming your data sets are independent random samples from a popu... | Does retraining a model on all available data necessarily yield a better model? | This practice derives from an understanding of the bias-variance tradeoff.
Recall that the expected test error can be broken down into three components.
$$ E \left[ \text{Test Error} \right] = \text{B | Does retraining a model on all available data necessarily yield a better model?
This practice derives from an understanding of the bias-variance tradeoff.
Recall that the expected test error can be broken down into three components.
$$ E \left[ \text{Test Error} \right] = \text{Bias}^2 + \text{Variance} + \text{Irreduc... | Does retraining a model on all available data necessarily yield a better model?
This practice derives from an understanding of the bias-variance tradeoff.
Recall that the expected test error can be broken down into three components.
$$ E \left[ \text{Test Error} \right] = \text{B |
48,356 | Estimate for the error of an error? | You did nothing wrong: the first value of $\delta s,$ which was obtained by an approximate method, is a close approximation to the second.
Let's compare the two expressions by using Stirling's approximation
$$\log\left(\Gamma(z)\right) \approx z \log(z) - z + \log(2\pi)/2 - \log(z)/2 + \frac{1}{12z} + O(z^{-2}).$$
and ... | Estimate for the error of an error? | You did nothing wrong: the first value of $\delta s,$ which was obtained by an approximate method, is a close approximation to the second.
Let's compare the two expressions by using Stirling's approxi | Estimate for the error of an error?
You did nothing wrong: the first value of $\delta s,$ which was obtained by an approximate method, is a close approximation to the second.
Let's compare the two expressions by using Stirling's approximation
$$\log\left(\Gamma(z)\right) \approx z \log(z) - z + \log(2\pi)/2 - \log(z)/2... | Estimate for the error of an error?
You did nothing wrong: the first value of $\delta s,$ which was obtained by an approximate method, is a close approximation to the second.
Let's compare the two expressions by using Stirling's approxi |
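The quoted Stirling expansion is easy to sanity-check numerically against `math.lgamma` (the evaluation points below are arbitrary):

```python
import math

def stirling_lgamma(z: float) -> float:
    # z*log(z) - z + log(2*pi)/2 - log(z)/2 + 1/(12z), as quoted above
    return (z * math.log(z) - z + math.log(2 * math.pi) / 2
            - math.log(z) / 2 + 1 / (12 * z))

err_10 = abs(stirling_lgamma(10.0) - math.lgamma(10.0))
err_100 = abs(stirling_lgamma(100.0) - math.lgamma(100.0))
```

The error shrinks rapidly as $z$ grows, consistent with the stated remainder order.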
48,357 | Clustered standard errors - Why are SE smaller or bigger than OLS depending on cluster level? | What you observe can be explained by the correlations in the measurements within the clusters. Namely, when you select an analysis, such as OLS that does not account for these correlations, you expect that standard errors of within clusters effects to be overestimated, and standard errors of between clusters effects to... | Clustered standard errors - Why are SE smaller or bigger than OLS depending on cluster level? | What you observe can be explained by the correlations in the measurements within the clusters. Namely, when you select an analysis, such as OLS that does not account for these correlations, you expect | Clustered standard errors - Why are SE smaller or bigger than OLS depending on cluster level?
What you observe can be explained by the correlations in the measurements within the clusters. Namely, when you select an analysis, such as OLS that does not account for these correlations, you expect that standard errors of w... | Clustered standard errors - Why are SE smaller or bigger than OLS depending on cluster level?
What you observe can be explained by the correlations in the measurements within the clusters. Namely, when you select an analysis, such as OLS that does not account for these correlations, you expect |
48,358 | Clustered standard errors - Why are SE smaller or bigger than OLS depending on cluster level? | The variance inflation equation (6) on page six (adjusted for unequal cluster size below) in the Cameron and Miller paper you linked contains the intuition. If you have positive correlation in either the regressor of interest or the errors within cities (the two $\rho$s), but a negative correlation within states, that ... | Clustered standard errors - Why are SE smaller or bigger than OLS depending on cluster level? | The variance inflation equation (6) on page six (adjusted for unequal cluster size below) in the Cameron and Miller paper you linked contains the intuition. If you have positive correlation in either | Clustered standard errors - Why are SE smaller or bigger than OLS depending on cluster level?
The variance inflation equation (6) on page six (adjusted for unequal cluster size below) in the Cameron and Miller paper you linked contains the intuition. If you have positive correlation in either the regressor of interest ... | Clustered standard errors - Why are SE smaller or bigger than OLS depending on cluster level?
The variance inflation equation (6) on page six (adjusted for unequal cluster size below) in the Cameron and Miller paper you linked contains the intuition. If you have positive correlation in either |
48,359 | What are the theoretical/practical reasons to use normal distribution to initialize the weights in Neural Networks? | (1) I think restricting the weights to have mean at 0 and std at 1 can
make the weights as small as possible, which makes it convenient for

regularization. Am I understanding it correctly?
No, setting them all to 0 would make them as small as possible.
(2) On the other hand, what are the theoretical/practical rea... | What are the theoretical/practical reasons to use normal distribution to initialize the weights in N | (1) I think restricting the weights to have mean at 0 and std at 1 can
make the weights as small as possible, which makes it convenient for
regularization. Am I understanding it correctly?
No, set | What are the theoretical/practical reasons to use normal distribution to initialize the weights in Neural Networks?
(1) I think restricting the weights to have mean at 0 and std at 1 can
make the weights as small as possible, which makes it convenient for
regularization. Am I understanding it correctly?
No, setting... | What are the theoretical/practical reasons to use normal distribution to initialize the weights in N
(1) I think restricting the weights to have mean at 0 and std at 1 can
make the weights as small as possible, which makes it convenient for
regularization. Am I understanding it correctly?
No, set |
48,360 | What are the theoretical/practical reasons to use normal distribution to initialize the weights in Neural Networks? | The other answer is good (+1), but just to add to it:
(1) No, one can easily make the weights even smaller by choosing a smaller $\sigma$ than 1. I do think the fact that $L_1$ and $L_2$ weight decay are very common is related to this, in the sense that initializing with $\mu\ne 0$ would be wasteful, as the weight deca... | What are the theoretical/practical reasons to use normal distribution to initialize the weights in N | The other answer is good (+1), but just to add to it:
(1) No, one can easily make the weights even smaller by choosing a smaller $\sigma$ than 1. I do think the fact that $L_1$ and $L_2$ weight decay | What are the theoretical/practical reasons to use normal distribution to initialize the weights in Neural Networks?
The other answer is good (+1), but just to add to it:
(1) No, one can easily make the weights even smaller by choosing a smaller $\sigma$ than 1. I do think the fact that $L_1$ and $L_2$ weight decay are ... | What are the theoretical/practical reasons to use normal distribution to initialize the weights in N
The other answer is good (+1), but just to add to it:
(1) No, one can easily make the weights even smaller by choosing a smaller $\sigma$ than 1. I do think the fact that $L_1$ and $L_2$ weight decay |
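A small numeric illustration of why the scale of the initialization matters (this is my own sketch of the common fan-in-scaled, Glorot/He-style choice, not something claimed in the answers above): scaling $\sigma$ by $1/\sqrt{\text{fan-in}}$ keeps the pre-activation variance near 1, whereas $\sigma = 1$ blows it up.

```python
import numpy as np

rng = np.random.default_rng(2)
fan_in = 512
x = rng.normal(size=(1000, fan_in))          # unit-variance inputs

# sigma scaled by 1/sqrt(fan_in) vs. a plain sigma = 1 initialization
W_scaled = rng.normal(scale=1 / np.sqrt(fan_in), size=(fan_in, fan_in))
W_unit = rng.normal(scale=1.0, size=(fan_in, fan_in))

var_scaled = (x @ W_scaled).var()   # stays near 1
var_unit = (x @ W_unit).var()       # grows to roughly fan_in
```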
48,361 | Definition of independence of two random vectors and how to show it in the jointly normal case | (1) What is the definition of independence between two random vectors $\mathbf X$ and $\mathbf Y$?
The definition of independence between two random vectors is the same as that between two ordinary random variables: Random vectors $\mathbf{x}$ and $\mathbf{y}$ are independent if and only if their joint distribution is... | Definition of independence of two random vectors and how to show it in the jointly normal case | (1) What is the definition of independence between two random vectors $\mathbf X$ and $\mathbf Y$?
The definition of independence between two random vectors is the same as that between two ordinary r | Definition of independence of two random vectors and how to show it in the jointly normal case
(1) What is the definition of independence between two random vectors $\mathbf X$ and $\mathbf Y$?
The definition of independence between two random vectors is the same as that between two ordinary random variables: Random v... | Definition of independence of two random vectors and how to show it in the jointly normal case
(1) What is the definition of independence between two random vectors $\mathbf X$ and $\mathbf Y$?
The definition of independence between two random vectors is the same as that between two ordinary r |
48,362 | Definition of independence of two random vectors and how to show it in the jointly normal case | (1) Regarding your first question, $X$ and $Y$ are independent if we can simply factorize the joint PDF as: $p_{X,Y}(x,y)=p_X(x)p_Y(y)$, irrespective of $X$ and $Y$ being vectors or not.
(2) I'm wondering where you saw that having pairwise independence of $X_i,Y_j$ for all $i,j$ pairs results in complete independence. I... | Definition of independence of two random vectors and how to show it in the jointly normal case | (1) Regarding your first question, $X$ and $Y$ are independent if we can simply factorize the joint PDF as: $p_{X,Y}(x,y)=p_X(x)p_Y(y)$, irrespective of $X$ and $Y$ being vectors or not.
(2) I'm wonde | Definition of independence of two random vectors and how to show it in the jointly normal case
(1) Regarding your first question, $X$ and $Y$ are independent if we can simply factorize the joint PDF as: $p_{X,Y}(x,y)=p_X(x)p_Y(y)$, irrespective of $X$ and $Y$ being vectors or not.
(2) I'm wondering where you saw that h... | Definition of independence of two random vectors and how to show it in the jointly normal case
(1) Regarding your first question, $X$ and $Y$ are independent if we can simply factorize the joint PDF as: $p_{X,Y}(x,y)=p_X(x)p_Y(y)$, irrespective of $X$ and $Y$ being vectors or not.
(2) I'm wonde |
48,363 | Can we apply KL divergence to the probability distributions on different domains? | KL divergence is only defined for distributions that are defined on the same domain.
In t-SNE, KL divergence is not computed between data distributions in the high- and low-dimensional spaces (this would be undefined, as above). Rather, the distributions of interest are based on neighbor probabilities. The probability ... | Can we apply KL divergence to the probability distributions on different domains? | KL divergence is only defined for distributions that are defined on the same domain.
In t-SNE, KL divergence is not computed between data distributions in the high- and low-dimensional spaces (this wo | Can we apply KL divergence to the probability distributions on different domains?
KL divergence is only defined for distributions that are defined on the same domain.
In t-SNE, KL divergence is not computed between data distributions in the high- and low-dimensional spaces (this would be undefined, as above). Rather, t... | Can we apply KL divergence to the probability distributions on different domains?
KL divergence is only defined for distributions that are defined on the same domain.
In t-SNE, KL divergence is not computed between data distributions in the high- and low-dimensional spaces (this wo |
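A minimal sketch of discrete KL divergence for two distributions on the same support (the toy distributions are my own):

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions on the SAME support."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                      # 0 * log(0/q) is treated as 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
d_pq = kl_divergence(p, q)            # positive for p != q
d_self = kl_divergence(p, p)          # divergence to itself is 0
```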
48,364 | Find UMVUE of $\theta$ where $f_X(x\mid\theta) =\theta(1 +x)^{−(1+\theta)}I_{(0,\infty)}(x)$ | From your previous question, you already have the complete sufficient statistic:
$$T(\mathbf{X}) = \sum_{i=1}^n \ln(1+X_i).$$
The simplest way to find the UMVUE estimator for $\theta$ is to appeal to the Lehmann-Scheffé theorem, which says that any unbiased estimator of $\theta$ which is a function of $T$ is the unique... | Find UMVUE of $\theta$ where $f_X(x\mid\theta) =\theta(1 +x)^{−(1+\theta)}I_{(0,\infty)}(x)$ | From your previous question, you already have the complete sufficient statistic:
$$T(\mathbf{X}) = \sum_{i=1}^n \ln(1+X_i).$$
The simplest way to find the UMVUE estimator for $\theta$ is to appeal to | Find UMVUE of $\theta$ where $f_X(x\mid\theta) =\theta(1 +x)^{−(1+\theta)}I_{(0,\infty)}(x)$
From your previous question, you already have the complete sufficient statistic:
$$T(\mathbf{X}) = \sum_{i=1}^n \ln(1+X_i).$$
The simplest way to find the UMVUE estimator for $\theta$ is to appeal to the Lehmann-Scheffé theorem... | Find UMVUE of $\theta$ where $f_X(x\mid\theta) =\theta(1 +x)^{−(1+\theta)}I_{(0,\infty)}(x)$
From your previous question, you already have the complete sufficient statistic:
$$T(\mathbf{X}) = \sum_{i=1}^n \ln(1+X_i).$$
The simplest way to find the UMVUE estimator for $\theta$ is to appeal to |
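As an illustration (my own simulation sketch): for this family, $T \sim \text{Gamma}(n, \text{rate}=\theta)$, so $(n-1)/T$ is unbiased for $\theta$ and, being a function of the complete sufficient statistic, is the UMVUE by Lehmann–Scheffé. The parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 2.5, 8, 100_000

# inverse-CDF sampling from F(x) = 1 - (1+x)^(-theta)
u = rng.uniform(size=(reps, n))
x = (1 - u) ** (-1 / theta) - 1

T = np.log1p(x).sum(axis=1)           # complete sufficient statistic
estimates = (n - 1) / T               # candidate UMVUE
mc_mean = estimates.mean()            # should be close to theta
```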
48,365 | Beta Distribution and how it is related to this question | The integral of $f$ can be expressed as a Beta function times a hypergeometric function. This suggests $f$ is not the density of any particular Beta distribution, but that it is indeed related.
To evaluate $k$ it's simpler and more elementary to use the substitution $y = \sin^2(x),$ $\mathrm{d}y = 2\sin(x)\cos(x)\math... | Beta Distribution and how it is related to this question | The integral of $f$ can be expressed as a Beta function times a hypergeometric function. This suggests $f$ is not the density of any particular Beta distribution, but that it is indeed related.
To ev | Beta Distribution and how it is related to this question
The integral of $f$ can be expressed as a Beta function times a hypergeometric function. This suggests $f$ is not the density of any particular Beta distribution, but that it is indeed related.
To evaluate $k$ it's simpler and more elementary to use the substitu... | Beta Distribution and how it is related to this question
The integral of $f$ can be expressed as a Beta function times a hypergeometric function. This suggests $f$ is not the density of any particular Beta distribution, but that it is indeed related.
To ev |
48,366 | multivariate Student's t distribution: intuition for non-independence? | Let's look at the situation. As a point of departure we will first study the bivariate standard Normal distribution. I will do this by plotting vertical slices through its graph: these are given by the functions
$$y\to \phi(x,y)$$
for $x = 0, \pm 1/2, \pm 1, \pm 3/2, \pm 2$ (where $\phi$ is the bivariate density). ... | multivariate Student's t distribution: intuition for non-independence? | Let's look at the situation. As a point of departure we will first study the bivariate standard Normal distribution. I will do this by plotting vertical slices through its graph: these are given by | multivariate Student's t distribution: intuition for non-independence?
Let's look at the situation. As a point of departure we will first study the bivariate standard Normal distribution. I will do this by plotting vertical slices through its graph: these are given by the functions
$$y\to \phi(x,y)$$
for $x = 0, \pm ... | multivariate Student's t distribution: intuition for non-independence?
Let's look at the situation. As a point of departure we will first study the bivariate standard Normal distribution. I will do this by plotting vertical slices through its graph: these are given by |
48,367 | Deriving the Maximum Likelihood Estimation (MLE) of a parameter for an Inverse Gaussian Distribution | The full derivation of the MLEs for IID data from an inverse Gaussian distribution can be found in the answer to this related question. In your case you have added an additional layer of complication by having observable data values $t_i = u_i - x_i - \tau$ that depend on some conditioning covariates and an additional... | Deriving the Maximum Likelihood Estimation (MLE) of a parameter for an Inverse Gaussian Distribution | The full derivation of the MLEs for IID data from an inverse Gaussian distribution can be found in the answer to this related question. In your case you have added an additional layer of complication | Deriving the Maximum Likelihood Estimation (MLE) of a parameter for an Inverse Gaussian Distribution
The full derivation of the MLEs for IID data from an inverse Gaussian distribution can be found in the answer to this related question. In your case you have added an additional layer of complication by having observab... | Deriving the Maximum Likelihood Estimation (MLE) of a parameter for an Inverse Gaussian Distribution
The full derivation of the MLEs for IID data from an inverse Gaussian distribution can be found in the answer to this related question. In your case you have added an additional layer of complication |
48,368 | Higher Order of Vectorization in Backpropagation in Neural Network | You're right that that doesn't make sense as the Jacobian. Furthermore, if multiplying Jacobians was really how autodiff worked, any pointwise function applied to a vector of length $n$ would result in a huge $n \times n$ Jacobian being created. This is not what happens in any competent autodiff implementation.
In reality... | Higher Order of Vectorization in Backpropagation in Neural Network | You're right that that doesn't make sense as the Jacobian. Furthermore, if multiplying Jacobians was really how autodiff worked, any pointwise function applied to a vector of length $n$ would result in a | Higher Order of Vectorization in Backpropagation in Neural Network
You're right that that doesn't make sense as the Jacobian. Furthermore, if multiplying Jacobians was really how autodiff worked, any pointwise function applied to a vector of length $n$ would result in a huge $n \times n$ Jacobian being created. This is no... | Higher Order of Vectorization in Backpropagation in Neural Network
You're right that that doesn't make sense as the Jacobian. Furthermore, if multiplying Jacobians was really how autodiff worked, any pointwise function applied to a vector of length $n$ would result in a
48,369 | Higher Order of Vectorization in Backpropagation in Neural Network | $\frac{\partial \mathcal{L}}{\partial W^{[2]}}$ must be 2x3, just like the dimensions of $ W^{[2]}$.
I suggest you use the backprop formulas (and notation) given in Nielsen's book. When the network gets bigger it is easier to follow
According to that
\begin{align*}
\delta^3 &=a^{[3]}-y \\
\delta^2 &= ((W^{[3]^{T}} ... | Higher Order of Vectorization in Backpropagation in Neural Network | $\frac{\partial \mathcal{L}}{\partial W^{[2]}}$ must be 2x3, just like the dimensions of $ W^{[2]}$.
I suggest you use the backprop formulas (and notation) given in Nielsen's book. When the network | Higher Order of Vectorization in Backpropagation in Neural Network
$\frac{\partial \mathcal{L}}{\partial W^{[2]}}$ must be 2x3, just like the dimensions of $ W^{[2]}$.
I suggest you use the backprop formulas (and notation) given in Nielsen's book. When the network gets bigger it is easier to follow
According to that
\... | Higher Order of Vectorization in Backpropagation in Neural Network
$\frac{\partial \mathcal{L}}{\partial W^{[2]}}$ must be 2x3, just like the dimensions of $ W^{[2]}$.
I suggest you use the backprop formulas (and notation) given in Nielsen's book. When the network
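A shape-check sketch of those backprop formulas in numpy, for an arbitrary tiny 3→2→1 sigmoid network of my own choosing (it only verifies that each weight gradient comes out with the same dimensions as its weight matrix):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(4)
a1 = rng.normal(size=(3, 1))                 # input activation
W2 = rng.normal(size=(2, 3))                 # layer sizes: 3 -> 2 -> 1
W3 = rng.normal(size=(1, 2))
y = np.array([[1.0]])

z2 = W2 @ a1; a2 = sigmoid(z2)
z3 = W3 @ a2; a3 = sigmoid(z3)

delta3 = a3 - y                              # delta^3 = a^[3] - y
delta2 = (W3.T @ delta3) * a2 * (1 - a2)     # (W^[3]^T delta^3) elementwise sigma'(z^2)

grad_W3 = delta3 @ a2.T                      # shape (1, 2), matches W3
grad_W2 = delta2 @ a1.T                      # shape (2, 3), matches W2
```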
48,370 | Higher Order of Vectorization in Backpropagation in Neural Network | I think this formula falls flat when trying to find the gradients, at least for the explanations given currently. Take a look at the last and the second last layer for example:
is a [2x1]. So, to get the gradients of the weights for this layer we will have to perform a
[1x1] matrix X [2x1] matrix. How is that possible... | Higher Order of Vectorization in Backpropagation in Neural Network | I think this formula falls flat when trying to find the gradients, at least for the explanations given currently. Take a look at the last and the second last layer for example:
is a [2x1]. So, to get | Higher Order of Vectorization in Backpropagation in Neural Network
I think this formula falls flat when trying to find the gradients, at least for the explanations given currently. Take a look at the last and the second last layer for example:
is a [2x1]. So, to get the gradients of the weights for this layer we will h... | Higher Order of Vectorization in Backpropagation in Neural Network
I think this formula falls flat when trying to find the gradients, at least for the explanations given currently. Take a look at the last and the second last layer for example:
is a [2x1]. So, to get
48,371 | Correct understanding of De Finetti's representation theorem | What is the proper interpretation of the parameter? The natural interpretation of the parameter $\Theta$ comes from the law of large numbers, which you have stated in your question. This says that the following equivalence holds almost surely:
$$\Theta = \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n X_i.$$
From... | Correct understanding of De Finetti's representation theorem | What is the proper interpretation of the parameter? The natural interpretation of the parameter $\Theta$ comes from the law of large numbers, which you have stated in your question. This says that th | Correct understanding of De Finetti's representation theorem
What is the proper interpretation of the parameter? The natural interpretation of the parameter $\Theta$ comes from the law of large numbers, which you have stated in your question. This says that the following equivalence holds almost surely:
$$\Theta = \li... | Correct understanding of De Finetti's representation theorem
What is the proper interpretation of the parameter? The natural interpretation of the parameter $\Theta$ comes from the law of large numbers, which you have stated in your question. This says that th
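A simulation sketch of that law-of-large-numbers characterization (the Beta mixing distribution and sample size are arbitrary choices of mine): draw $\Theta$ once, then generate the sequence as IID Bernoulli($\Theta$) given it; the empirical relative frequency recovers the drawn $\Theta$.

```python
import numpy as np

rng = np.random.default_rng(5)
theta = rng.beta(2, 5)                      # one draw of the latent parameter
x = rng.binomial(1, theta, size=200_000)    # conditionally IID Bernoulli(theta)
freq = x.mean()                             # relative frequency -> theta
```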
48,372 | L1 vs L2 stability? | This is generally called "sensitivity analysis" or "stability". An excellent paper deriving bounds based on this is Stability and Generalization. The bounds of course aren't necessarily tight!
If you look at Definition 19 and the follow-up Theorems and Lemmas you can see that if something is $\sigma$-admissible then th... | L1 vs L2 stability? | This is generally called "sensitivity analysis" or "stability". An excellent paper deriving bounds based on this is Stability and Generalization. The bounds of course aren't necessarily tight!
If you | L1 vs L2 stability?
This is generally called "sensitivity analysis" or "stability". An excellent paper deriving bounds based on this is Stability and Generalization. The bounds of course aren't necessarily tight!
If you look at Definition 19 and the follow-up Theorems and Lemmas you can see that if something is $\sigma... | L1 vs L2 stability?
This is generally called "sensitivity analysis" or "stability". An excellent paper deriving bounds based on this is Stability and Generalization. The bounds of course aren't necessarily tight!
If you |
48,373 | L1 vs L2 stability? | L1 norm is based on minimising Least Absolute Deviation, with absolute deviation being calculated:
$$ AD = \Sigma^n_{i=1} |y_i-f(x_i)| $$
L2 norm is based on least squared deviation, with squared deviation being calculated:
$$ LSD = \Sigma^n_{i=1} (y_i-f(x_i))^2 $$
So what is the difference for small vs large nudges? ... | L1 vs L2 stability? | L1 norm is based on minimising Least Absolute Deviation, with absolute deviation being calculated:
$$ AD = \Sigma^n_{i=1} |y_i-f(x_i)| $$
L2 norm is based on least squared deviation, with squared devi | L1 vs L2 stability?
L1 norm is based on minimising Least Absolute Deviation, with absolute deviation being calculated:
$$ AD = \Sigma^n_{i=1} |y_i-f(x_i)| $$
L2 norm is based on least squared deviation, with squared deviation being calculated:
$$ LSD = \Sigma^n_{i=1} (y_i-f(x_i))^2 $$
So what is the difference for sma... | L1 vs L2 stability?
L1 norm is based on minimising Least Absolute Deviation, with absolute deviation being calculated:
$$ AD = \Sigma^n_{i=1} |y_i-f(x_i)| $$
L2 norm is based on least squared deviation, with squared devi |
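The difference is easy to see on a toy residual vector (the numbers are mine): nudging one residual from 1 to 10 moves AD by 9 but LSD by 99, so the squared criterion is far more sensitive to large deviations.

```python
import numpy as np

def abs_dev(res):
    return float(np.sum(np.abs(res)))       # AD from the formula above

def sq_dev(res):
    return float(np.sum(np.square(res)))    # LSD from the formula above

residuals = np.array([1.0, -1.0, 0.5, -0.5])
nudged = residuals.copy()
nudged[0] = 10.0                            # one large nudge

ad_change = abs_dev(nudged) - abs_dev(residuals)    # 12 - 3 = 9
lsd_change = sq_dev(nudged) - sq_dev(residuals)     # 101.5 - 2.5 = 99
```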
48,374 | How to forecast integer time series in R? | When you are looking for suitable packages, use the CRAN task views. In this case, the time series task view contains the following line:
Count time series models are handled in the tscount and acp packages. ZIM provides for Zero-Inflated Models for count time series. tsintermittent implements various models for analy... | How to forecast integer time series in R? | When you are looking for suitable packages, use the CRAN task views. In this case, the time series task view contains the following line:
Count time series models are handled in the tscount and acp p | How to forecast integer time series in R?
When you are looking for suitable packages, use the CRAN task views. In this case, the time series task view contains the following line:
Count time series models are handled in the tscount and acp packages. ZIM provides for Zero-Inflated Models for count time series. tsinterm... | How to forecast integer time series in R?
When you are looking for suitable packages, use the CRAN task views. In this case, the time series task view contains the following line:
Count time series models are handled in the tscount and acp p |
48,375 | what is the difference between binary cross entropy and categorical cross entropy? [duplicate] | I would like to expand on ARMAN's answer:
Not getting into formulas the biggest difference would be that categorical crossentropy is based on the assumption that only 1 class is correct out of all possible ones (so output should be something like [0,0,0,1,0] if the rating is 4) while binary_crossentropy works on each ... | what is the difference between binary cross entropy and categorical cross entropy? [duplicate] | I would like to expand on ARMAN's answer:
Not getting into formulas the biggest difference would be that categorical crossentropy is based on the assumption that only 1 class is correct out of all po | what is the difference between binary cross entropy and categorical cross entropy? [duplicate]
I would like to expand on ARMAN's answer:
Not getting into formulas the biggest difference would be that categorical crossentropy is based on the assumption that only 1 class is correct out of all possible ones (so output sh... | what is the difference between binary cross entropy and categorical cross entropy? [duplicate]
I would like to expand on ARMAN's answer:
Not getting into formulas the biggest difference would be that categorical crossentropy is based on the assumption that only 1 class is correct out of all po |
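A minimal numpy version of both losses applied to the one-hot target [0,0,0,1,0] from the answer (my own sketch; it only loosely mirrors the usual framework definitions):

```python
import numpy as np

def categorical_ce(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(-np.sum(y_true * np.log(y_pred)))   # only the true class counts

def binary_ce(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1 - y_true) * np.log(1 - y_pred)))  # every entry counts

y_true = [0, 0, 0, 1, 0]                 # "rating is 4", one-hot
probs = [0.05, 0.05, 0.1, 0.7, 0.1]      # softmax-style output, sums to 1
cce = categorical_ce(y_true, probs)      # = -log(0.7)
bce = binary_ce(y_true, probs)           # averages over all five "labels"
```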
48,376 | Meaning of variance term in confidence interval for Multiple Linear Regression | MSE measures the variance of the error. To be clear -- that's the variance of the model errors, not the variance of the data. You can see this by looking at $SSE = \sum_i (y_i - f(x_i))^2$. $SSE$ gives the squared difference between the observed and fitted values. Linear regression models are fit by minimizing $MSE$. From the... | Meaning of variance term in confidence interval for Multiple Linear Regression | MSE measures the variance of the error. To be clear -- that's the variance of the model errors, not the variance of the data. You can see this by looking at $SSE = \sum_i (y_i - f(x_i))^2$. $SSE$ gives the s | Meaning of variance term in confidence interval for Multiple Linear Regression
MSE measures the variance of the error. To be clear -- that's the variance of the model errors, not the variance of the data. You can see this by looking at $SSE = \sum_i (y_i - f(x_i))^2$. $SSE$ gives the squared difference between the observed an... | Meaning of variance term in confidence interval for Multiple Linear Regression
MSE measures the variance of the error. To be clear -- that's the variance of the model errors, not the variance of the data. You can see this by looking at $SSE = \sum_i (y_i - f(x_i))^2$. $SSE$ gives the s
48,377 | Meaning of variance term in confidence interval for Multiple Linear Regression | The value $\sigma^2 = \mathbb{V}(\varepsilon_i)$ is the error variance in the regression model, and the variance result in your post is a consequence of the underlying variance of the OLS coefficient estimator:
$$\mathbb{V}(\hat{\boldsymbol{\beta}}) = \sigma^2 (\mathbf{x}^\text{T} \mathbf{x})^{-1}.$$
Since $\hat{y}(\ma...
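The covariance formula $\mathbb{V}(\hat{\boldsymbol{\beta}}) = \sigma^2 (\mathbf{x}^\text{T} \mathbf{x})^{-1}$ can be checked by simulation. This is a hedged sketch with a made-up design matrix and known error SD, not part of the original answer:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 50, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # fixed design
beta = np.array([1.0, 3.0])

# Theoretical covariance of the OLS estimator
theory = sigma**2 * np.linalg.inv(X.T @ X)

# Empirical covariance of OLS estimates over repeated error draws
estimates = []
for _ in range(5_000):
    y = X @ beta + sigma * rng.normal(size=n)
    estimates.append(np.linalg.lstsq(X, y, rcond=None)[0])
empirical = np.cov(np.array(estimates).T)

print(np.round(theory, 3))
print(np.round(empirical, 3))  # should closely match the theoretical matrix
```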
48,378 | Is the maximum bound of Euclidean distance between two probability distributions equal to $\sqrt{2}$? | $d_{xy}^2 = \sum{(x-y)^2} = \sum x^2 + \sum y^2 - 2\sum xy$.
Given that in probability vectors all values are nonnegative, $d^2$ is max when the last term is zero. Then $d^2 = \sum x^2 + \sum y^2$.
In a probability vector all values (which are between 0 and 1) sum up to 1, $\sum x = \sum y = 1$. In such a vector, its t...
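A quick numerical check of the bound (a sketch using random draws from the probability simplex, added here rather than taken from the answer): sampled pairs never exceed $\sqrt{2}$, and opposite corners of the simplex attain it exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
best = 0.0
for _ in range(20_000):
    # Two random probability vectors on the 3-simplex
    a = rng.dirichlet(np.ones(3))
    b = rng.dirichlet(np.ones(3))
    best = max(best, float(np.linalg.norm(a - b)))

# The bound is attained at opposite corners of the simplex
corner = np.linalg.norm(np.array([1.0, 0.0, 0.0]) - np.array([0.0, 1.0, 0.0]))
print(best)    # never exceeds sqrt(2)
print(corner)  # sqrt(2), about 1.41421
```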
48,379 | Is the maximum bound of Euclidean distance between two probability distributions equal to $\sqrt{2}$? | Yes, in the 2 category case, $\sqrt{2}$ is the maxima achieved at both $A=(1,0), B=(0,1)$ and $A=(0,1), B=(1,0)$.
I'll supply a heuristic argument which should be straightforward to rigorously demonstrate. Define $d_1 = A_1 - B_1$ and $d_2 = A_2 - B_2$. Then euclidean distance can be thought of as circular contours aro...
48,380 | Do I need to discard 90% of experiments so that the sample is independent? | You definitely do not need to discard 90% of your observations. The passage talks about sampling from a (finite) population. If your population had 10,000 units in it, the passage recommends you draw a sample of size less than 1,000. My intuition on the reason for this is doing so yields properties of the random sample...
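One standard way to quantify why a small sampling fraction matters is the finite population correction factor $\sqrt{(N-n)/(N-1)}$: it stays near 1 (so draws behave almost like independent ones) when well under 10% of the population is sampled. This sketch with a hypothetical population of 10,000 is an addition, not part of the quoted passage:

```python
import math

N = 10_000  # hypothetical population size
for n in (100, 1_000, 5_000):
    fpc = math.sqrt((N - n) / (N - 1))
    print(f"n={n:5d}  FPC={fpc:.3f}")
# n=100 gives roughly 0.995 (draws nearly independent);
# n=5,000 gives roughly 0.707 (independence badly strained)
```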
48,381 | How should Type II SS be calculated in a mixed model? | The difference between the Type II tests from car::Anova and the anova method for lmerTest is due to how continuous explanatory variables are handled. The first passus in the Details section of help(Anova) describes how Anova handles them:
The designations "type-II" and "type-III" are borrowed from SAS, but
the defi...
48,382 | How should Type II SS be calculated in a mixed model? | The difference is because A is not centered around zero. Not sure yet why this matters though, or if it should (it doesn't seem that it should...); other answers which explore this more fully are most welcome.
d$Ac <- d$A - mean(d$A)
m2 <- lmer(Y ~ Ac*B*C + (1|ID), data=d)
anova(m2, type=1)
#> Type I Analysis of Varia...
48,383 | Intuition about the deep meaning of Bayesian priors and its influence on posteriors | Your statement echoes Jaynes. He said
When we look at these problems on a sufficiently fundamental level and realize how careful one must be to specify the prior information before we have a well-posed problem, it becomes evident that there is, in fact, no logical difference between (3.51) and (4.3); exactly the same...
48,384 | Intuition about the deep meaning of Bayesian priors and its influence on posteriors | This is how I read your question -- "Why are priors given arbitrary values when they have a bearing on the calculated posterior?"
Note: I come from a physics background -- please let me know if you think I am using some terms wrong.
I shall pose a series of atomic questions and answer them as I understand them from th...
48,385 | Intuition about the deep meaning of Bayesian priors and its influence on posteriors | My possibly idiosyncratic view is as follows. If we had an exact, fully-known, prior distribution on the parameters, possibly belief-based, and we knew the true likelihood function, the Bayesian paradigm gives us the optimal way of updating that prior with the likelihood to get a posterior. In real life we don't have...
48,386 | What is the meaning of fuzz factor? | The epsilon in Keras is a small floating point number used to generally avoid mistakes like divide by zero. An example of its usage is here in the Keras code for calculation of mean absolute percentage error https://github.com/keras-team/keras/blob/c67adf1765d600737b0606fd3fde48045413dee4/keras/losses.py#L22
from . imp...
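The pattern behind the "fuzz factor" can be sketched in plain NumPy (an illustration of the idea, not the Keras source): clip the denominator by epsilon so a true value of exactly zero cannot trigger a divide-by-zero.

```python
import numpy as np

EPSILON = 1e-7  # Keras-style fuzz factor

def mean_absolute_percentage_error(y_true, y_pred):
    # Keep the denominator at least EPSILON away from zero
    denom = np.clip(np.abs(y_true), EPSILON, None)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / denom))

# A zero in y_true would blow up without the clip; with it, the result is finite
print(mean_absolute_percentage_error(np.array([0.0, 2.0]), np.array([1.0, 1.0])))
```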
48,387 | AUC for random classifier in case of unbalanced dataset | A random classifier gives AUC 0.5 in expectation regardless of class balance.
@article{Fawcett:2006:IRA:1159473.1159475,
author = {Fawcett, Tom},
title = {An Introduction to ROC Analysis},
journal = {Pattern Recogn. Lett.},
issue_date = {June 2006},
volume = {27},
number = {8},
month = jun,
year = {2006},
iss...
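This can be checked with a small simulation (a sketch added here, not part of the cited article): score every example with uniform random noise on a heavily imbalanced label set and compute the AUC from its rank-statistic definition.

```python
import random

random.seed(0)
labels = [1] * 100 + [0] * 9_900            # 1% positives
scores = [random.random() for _ in labels]  # a purely random classifier

# AUC = P(score of a random positive > score of a random negative)
pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]
wins = sum(1 for p in pos for n in neg if p > n)
auc = wins / (len(pos) * len(neg))
print(auc)  # close to 0.5 despite the 1:99 imbalance
```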
48,388 | AUC for random classifier in case of unbalanced dataset | Yes, but see below. One of the advantages of AUC is precisely that it measures the classification accuracy regardless of how many positives and negatives there are.
AUC is the Area Under the ROC curve. The ROC curve plots the True Positive Rate against the False Positive Rate, with the False Positive Rate being the ...
48,389 | AUC for random classifier in case of unbalanced dataset | When there is a class imbalance, we must seek out the business domain experts (if we are not the experts ourselves) and see what they are after: are they after negative or positive data points? Once that is frozen, and there is a go-ahead for fixing the class imbalance, there are 7 major techniques to handle it; please refer to https://www.kdnuggets.com/2017/0...
48,390 | Expectations calculation question (regarding the autocovariance sequence of the square of a zero-mean stationary process) | Your lecturer is wrong, as a simple counterexample will show.
Consider the process $(X_t\mid t\in\mathbb Z)$ where
$$(X_t) = (\ldots,-1,1,-1,1,\ldots) = ((-1)^t)$$
with probability $1/2$ and otherwise
$$(X_t) = (\ldots,1,-1,1,-1,\ldots) = (-(-1)^t).$$
Simple calculations (using nothing more than the definitions of exp...
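The counterexample's two equally likely sample paths can be simulated directly (a minimal sketch): whichever path is drawn, the squared process is identically 1, so $X_t^2$ carries no randomness at all.

```python
import random

random.seed(1)
# With probability 1/2 each, the whole path is (-1)^t or -(-1)^t
sign = random.choice([1, -1])
x = [sign * (-1) ** t for t in range(10)]
print(x)                   # strictly alternating +1/-1
print([v * v for v in x])  # the squared process: all ones
```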
48,391 | Does Percent Change Difference A Time Series | First of all, note that stationarity and differencing come up in the context of ARMA and ARIMA models (see here and here). Other forecasting models, such as exponential smoothing, don't require stationary data.
As a toy example, I think of GDP and percent change in GDP.
In the examples you link to, the percent chang...
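A related mechanical point, sketched with made-up GDP figures (an addition, not from the answers): for small changes, the percent change is numerically close to the first difference of the logged series, which is why the two transformations are often treated as interchangeable.

```python
import math

gdp = [100.0, 102.0, 101.0, 104.0]  # hypothetical levels
pct = [(b - a) / a for a, b in zip(gdp, gdp[1:])]
logdiff = [math.log(b) - math.log(a) for a, b in zip(gdp, gdp[1:])]
for p, d in zip(pct, logdiff):
    print(round(p, 4), round(d, 4))  # the two columns nearly agree
```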
48,392 | Does Percent Change Difference A Time Series | Differencing in GDP is quite popular, though it's not the only way to deal with nonstationarity in ARMA or regression. The jury's out on whether the log GDP series is unit root or a trend, i.e.
$$\Delta \ln \mathrm{GDP}_t=X_t\beta+\varepsilon_t\\\varepsilon_t\sim\mathcal N(0,\sigma^2)$$
vs.
$$\ln \mathrm{GDP}_t=X\beta_...
48,393 | Why are the results of R's ccf and SciPy's correlate different? | The difference is due to different definitions of cross-correlation and autocorrelation in different domains.
See Wikipedia's article on autocorrelation for more information, but here is the gist. In statistics, autocorrelation is defined as Pearson correlation of the signal with itself at different time lags. In sign...
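The gap between the two conventions can be shown in a few lines (a sketch; `np.correlate` stands in here for the raw sliding inner product): the signal-processing cross-correlation neither demeans nor normalizes, while the statistical ACF that R's acf/ccf reports does both.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)

# Signal-processing convention: raw sliding inner product, no normalization
raw = np.correlate(x, x, mode="full")

# Statistical convention: demean, then normalize by the lag-0 autocovariance
xc = x - x.mean()
acov = np.correlate(xc, xc, mode="full") / len(x)
acf = acov[len(x) - 1:] / acov[len(x) - 1]

print(acf[0])           # 1.0 by construction
print(raw[len(x) - 1])  # lag-0 raw value: a sum of squares, not 1
```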
48,394 | Does white noise imply wide-sense stationary? | Well, this depends on your definition of white noise. This question asks for that definition.
One answer gives:
A white noise process is a random process of random variables that are uncorrelated, have mean zero, and a finite variance. Formally, $X(t)$ is a white noise process if
$E(X(t))=0,E(X(t)^2)=S^2$, and $E(X(...
48,395 | Does white noise imply wide-sense stationary? | White noise has the properties that you state, but those properties are not the properties that define white noise. As Michael Chernick's comment points out, a (discrete-time) white noise process is a collection of independent identically distributed zero-mean random variables, one for each time instant under considera...
48,396 | Does white noise imply wide-sense stationary? | Wide-sense stationarity means weak or covariance stationarity, i.e. only the first two moments (mean and variance) are time-invariant or constant. A white noise time series in its simplest form has 0 mean, constant variance and is serially uncorrelated. Hence white noise implies wide-sense stationarity.
48,397 | Why may results from model with interaction term and stratified model be different? | In general, the stratified model requires more power to estimate, is more flexible and general, but harder to draw inference from. You cannot directly calculate a $p$-value. You must either use a path-model, bootstrap or permutation test, or the $\delta$-method to obtain standard errors for the difference in regression...
48,398 | Why may results from model with interaction term and stratified model be different? | I delved quite into this issue and ended up putting out a paper on it, which may be helpful to you.
https://ehp.niehs.nih.gov/doi/10.1289/EHP334
Basically, the stratified and product-term models encode different assumptions about the covariates. If you were to include product terms between the modifier and all covaria...
48,399 | What is the difference among stochastic, batch and mini-batch learning styles? | Yes, your understanding is correct.
In the case of batch or mini-batch back-propagation we really use the "average ....
We should use the average gradient.
However, you can choose the learning rate and account for averaging. If you use sum, the division term can be subsumed in the learning rate; however, learning ra...
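The three styles differ only in the batch size fed to one weight update. Below is a minimal linear-regression sketch with an averaged MSE gradient (names and data are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)

def grad(w, Xb, yb):
    # Average MSE gradient over the batch, so the step size
    # does not depend on how many examples the batch holds
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

w = np.zeros(3)
lr, batch_size = 0.1, 10  # batch: 100, stochastic: 1, mini-batch: in between
for epoch in range(200):
    for start in range(0, len(X), batch_size):
        Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        w -= lr * grad(w, Xb, yb)

print(np.round(w, 2))  # approximately [ 1. -2.  0.5]
```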
48,400 | Is convergence in probability equivalent to "almost surely... something" | The answer is no: there is no such property. Any property of the form "a.s. something" that implies convergence in probability also implies a.s. convergence, hence cannot be equivalent to convergence in probability.
Proof:
Write $X=(X_n)_{n\in\mathbb{N}}$. Let's assume all variables $X_n$ are
binary (in $\{0;1\}$)...
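The standard concrete example behind this fact (a sketch, not the answer's proof): independent $X_n \sim \text{Bernoulli}(1/n)$ converges to 0 in probability since $P(X_n = 1) \to 0$, yet $\sum_n 1/n = \infty$, so by Borel-Cantelli a one occurs infinitely often and the path does not converge almost surely.

```python
import random

random.seed(0)
N = 100_000
# Independent X_n ~ Bernoulli(1/n)
xs = [1 if random.random() < 1.0 / n else 0 for n in range(1, N + 1)]

# Convergence in probability: late terms are almost always 0
late = xs[50_000:]
print(sum(late) / len(late))  # a tiny fraction

# But ones keep appearing along the whole path
print(sum(xs))  # grows like log(N), never stops
```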