**46,601. Are all neural network activation functions differentiable?**

No! For example, ReLU, which is a widely used activation function, is not differentiable at $z=0$. But activation functions are usually non-differentiable at only a small number of points, and they have right and left derivatives at those points. We usually use one of the one-sided derivatives. This is rational since digital ...
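As an illustration of using a one-sided derivative at the kink, here is a minimal NumPy sketch (picking 0 as the derivative at $z=0$ is a convention, not the only valid choice):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_grad(z):
    # ReLU is not differentiable at z = 0; by convention we pick the
    # left derivative there, so grad = 1 for z > 0 and 0 otherwise
    return (z > 0).astype(float)

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))       # [0. 0. 3.]
print(relu_grad(z))  # [0. 0. 1.]
```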
**46,602. Are all neural network activation functions differentiable?**

If you're going to use gradient descent to learn parameters, you not only need the activation functions to be differentiable almost everywhere; ideally the gradient should also be non-zero over large parts of the domain. It is not a strict requirement that the gradient be non-zero almost everywhere. For example, ReLU has gradi...
**46,603. What's stopping a gradient from making a probability negative?**

There are a couple of issues with the expressions in this question. I'll address these, then answer the question below.

Issues with the expressions

In your expression, $p(x \mid \theta)$ will always be 0 if $x=1$. It should be $\theta^{x_d} (1 - \theta)^{1-x_d}$ rather than $\theta^{x_d} (1 - \theta^{1-x_d})$. No need to...
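A quick numerical check of the correction stated above (a sketch; `theta = 0.7` is an arbitrary choice):

```python
def pmf_correct(x, theta):
    # Bernoulli pmf: theta^x * (1 - theta)^(1 - x)
    return theta**x * (1 - theta)**(1 - x)

def pmf_typo(x, theta):
    # the mis-bracketed version from the question: theta^x * (1 - theta^(1 - x))
    return theta**x * (1 - theta**(1 - x))

theta = 0.7
print(pmf_correct(1, theta), pmf_correct(0, theta))  # theta and 1 - theta
print(pmf_typo(1, theta))                            # 0.0, regardless of theta
```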
**46,604. Approaches for comparing visual representation of two distributions with unequal sample sizes**

If you really need to compare histograms at different sample sizes, scale them both to area 1 (i.e. to be density estimates).

However, as Nick suggested in the comments, there are other ways of comparing the distributions that don't require binning. You could plot ECDFs, or a pair of theoretical QQ plots on the same axes ...
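A sketch of the area-1 scaling with NumPy (the sample sizes and distributions here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 1000)  # large sample
b = rng.normal(0.2, 1.0, 80)    # small sample

# density=True rescales bar heights so each histogram integrates to 1,
# which makes the two shapes comparable despite the unequal n
dens_a, edges_a = np.histogram(a, bins=30, density=True)
dens_b, edges_b = np.histogram(b, bins=15, density=True)
print(np.sum(dens_a * np.diff(edges_a)))  # 1.0 (up to rounding)
print(np.sum(dens_b * np.diff(edges_b)))  # 1.0 (up to rounding)

# a binning-free alternative: values of the empirical CDF of sample a
x = np.sort(a)
ecdf_a = np.arange(1, x.size + 1) / x.size
```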
**46,605. Approaches for comparing visual representation of two distributions with unequal sample sizes**

Because you asked about histograms, I'm assuming you are interested in comparing the shapes of two distributions to see how similar they are. This is distinct from trying to visualize other aspects of the distributions, such as whether their means differ.

In general, histograms are a blunt tool for assessing the sha...
**46,606. Imputation methods for time series data**

Your approach sounds very theoretical. Did you analyze the imputations of the packages you mentioned? Imputation packages often have requirements (e.g. MCAR data), but will still do a reasonably good job on data that does not fulfill those conditions. Only an actual test and comparison of algorithms will show you which one i...
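One way to run the "actual test" suggested above is to mask values you do know, impute them, and score the result; a sketch with two simple pandas imputers (the sine series and the 10% masking rate are arbitrary choices):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
t = np.arange(200)
series = pd.Series(np.sin(t / 10) + rng.normal(0, 0.1, 200))

# hide 10% of known values so imputations can be scored against the truth
mask = rng.random(200) < 0.10
truth = series[mask]
holey = series.copy()
holey[mask] = np.nan

for name, imputed in [
    ("mean fill", holey.fillna(holey.mean())),
    ("linear interpolation", holey.interpolate(limit_direction="both")),
]:
    rmse = np.sqrt(np.mean((imputed[mask] - truth) ** 2))
    print(f"{name}: RMSE = {rmse:.3f}")
```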
**46,607. Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian statistics**

Notationally, $P(\cdot)$ is a functional used to denote the probability of events. The same notation is used when speaking of probability in a frequentist framework (e.g. probability as the frequency of events observed in infinite replications of the universe, or counterfactual probability) as in a Bayesian framework (e.g. probab...
**46,608. Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian statistics**

You are free to use any notation you want: $p(X=x)$, $P(X=x)$, $\mathrm{P}(X=x)$, $\Pr(X=x)$, etc. I have never heard of any formal rules about it. My impression is that $p(X)$ is more often used when authors want to talk about probability in general and need a catch-all term for things like probabilities of events $p(X=x)...
**46,609. Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian statistics**

I favour $P$ for P-value partly because that usage goes back a long way and I like the history of ideas, but perhaps more because $p$ is already overloaded: I often want $p$ to mean

- probability in general
- a particular probability (e.g. in notation for binomial distributions)
- the number of predictors or covariates.

Co...
**46,610. Support vector machine optimization question**

The simplification in this problem is that the intercept term $\theta_0=0$. This condition allows the decision boundary to be drawn through the origin. The goal is then to maximize the signed length of the projection vectors, so as to minimize the norm $\lVert\theta\rVert$, which is our optimization goal: $\underset{\th...
**46,611. Support vector machine optimization question**

I suggest that you study Andrew Ng's lecture notes on SVMs. The short answer is: first you solve for the $\alpha_i$ in the dual optimization problem; then you solve for $\theta$ ($w$ in the lecture notes) using the following equation:
$$w=\sum_{i=1}^{m}\alpha_i y^{(i)} x^{(i)}$$
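To make the formula concrete, here is a hand-worked toy case (the two points and the dual solution $\alpha_1=\alpha_2=0.5$ are constructed for this symmetric example, not taken from the lecture notes):

```python
import numpy as np

# toy separable problem: x1 = (1, 0) with y = +1, x2 = (-1, 0) with y = -1
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])
alpha = np.array([0.5, 0.5])  # dual solution for this symmetric pair

# w = sum_i alpha_i * y_i * x_i
w = (alpha * y) @ X
print(w)            # [1. 0.]
print(y * (X @ w))  # both margins equal 1: the points are support vectors
```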
**46,612. Discrete white noise**

Assuming that the $S(n)$ are also binary (taking values in $\{0,1\}$), as are the $X(n)$, I suspect that the $\epsilon(n)$ are also meant to take values in $\{0,1\}$ and that the $+$ in $X(n) = S(n) + \epsilon(n)$ is intended to be a modulo-2 (exclusive-OR) sum, which might be better written as
$$X(n) = ...
**46,613. Discrete white noise**

The definition of discrete white noise is quite similar to the continuous-time definition: it has mean zero (constant), its variance is constant (and nonzero), and there is no autocorrelation:
$$E[\varepsilon(n)]=0$$
$$\mathrm{Var}[\varepsilon(n)]=\sigma^2$$
$$E[\varepsilon(n)\varepsilon(n-k)]=0,\quad k>0$$
Example: $\...
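These three properties are easy to check empirically; a sketch with simulated Gaussian noise ($\sigma=2$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
eps = rng.normal(0.0, 2.0, n)  # white noise with sigma = 2

print(eps.mean())  # close to 0
print(eps.var())   # close to sigma^2 = 4

# sample autocorrelation at lag k should be close to 0
k = 1
r1 = np.mean(eps[:-k] * eps[k:]) / eps.var()
print(r1)
```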
**46,614. Using Non-numeric Features**

In most cases, you find a way to turn the non-numeric feature into a numeric one, and then go from there.

The simplest solution is to generate a set of indicator variables. For example, if you have $n$ different schools, you might add a set of $n$ variables $S_1, S_2, \ldots, S_n$ to each data point. To indicate that $i$...
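A sketch of those indicator variables with pandas (the school names and prices are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "price": [250, 310, 199, 420],
    "school": ["Lincoln", "Roosevelt", "Lincoln", "Jefferson"],
})

# one 0/1 indicator column per school; each row has a 1 in exactly one of them
dummies = pd.get_dummies(df["school"], prefix="S", dtype=int)
print(pd.concat([df["price"], dummies], axis=1))
```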
**46,615. Using Non-numeric Features**

There could be many categorical variables that affect the sale price, for example Ready_for_immediate_occupancy (Yes/No). I will not worry about your specific variable (closest high school) and will just address using categorical variables in regression.

As @Matt Krause said, it is certainly not uncommon to use indi...
**46,616. How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?**

> How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?

Because it's not a test for means.

> I'm wanting to compare scores derived from a reaction time task between two groups with unequal sizes (G1 = 78; G2 = 23). However, when I run the U test it tells me there is no significant difference, U = 897.00, z =...
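A tiny constructed example (not the questioner's data) makes the point: the group means differ by a factor of 20, yet the ranks interleave symmetrically, so the exact test returns p = 1.

```python
from scipy.stats import mannwhitneyu

a = [1, 100]  # mean 50.5
b = [2, 3]    # mean 2.5
res = mannwhitneyu(a, b, alternative="two-sided", method="exact")
print(res.statistic, res.pvalue)  # U = 2.0, p = 1.0
```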
**46,617. How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?**

The Mann-Whitney U test is a non-parametric test, meaning, loosely put, that it counts up hits and misses among rankings; the point is that the number of outcomes is countable, as opposed to real-valued and continuous like a $t$-statistic. With some simplification, the number of outcomes is $\frac{n(n+1)}{2}$, or the number...
**46,618. How to choose a proper ARIMA model looking at ACF and PACF?**

The visuals suggest your data is probably hourly, which means there may be daily effects depending on the kind of data it is. Daily effects often include day-of-the-week effects, weekly effects, holiday effects, et al., and of course possible outliers, level shifts, and time trends. Why don't you post your dat...
**46,619. How to choose a proper ARIMA model looking at ACF and PACF?**

Since the ACF of the series gets closer to zero with increasing lags, the order of integration must be 0; that is, you have an ARIMA(p,0,q). (Here I might be a little cautious, since you might have an order of integration $d$ with $-\frac12<d<0$, in case you are interested in long-range-dependence filters.)

On the o...
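The diagnostic described here (an ACF decaying toward zero suggests $d=0$, while an integrated series keeps its ACF near 1) can be simulated; a sketch with a stationary AR(1) versus a random walk (the coefficient 0.7 and the sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
e = rng.normal(size=n)

ar1 = np.zeros(n)      # stationary AR(1): d = 0
for t in range(1, n):
    ar1[t] = 0.7 * ar1[t - 1] + e[t]
walk = np.cumsum(e)    # random walk: d = 1

def acf(x, lag):
    x = x - x.mean()
    return np.sum(x[:-lag] * x[lag:]) / np.sum(x * x)

for lag in (1, 10, 50):
    # the AR(1) ACF dies out quickly; the random walk's stays near 1
    print(lag, round(acf(ar1, lag), 2), round(acf(walk, lag), 2))
```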
**46,620. Decision Trees: if always binary split**

CHAID trees can have multiple child nodes (more than 2), so decision trees are not always binary.

There are many different tree-building algorithms, and the Random Forest algorithm actually creates an ensemble of decision trees. In the original paper, the authors used a slight variation of the CART algorithm. However, it's not ...
**46,621. Statistical test to determine if a relationship is linear?**

Any rank test will only test for monotonicity, and a highly nonlinear relationship can certainly be monotone, so any rank-based test won't be helpful.

I would recommend that you fit a linear and a nonlinear model and assess whether the nonlinear model explains a significantly larger amount of variance via ANOVA. Here i...
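A sketch of that ANOVA comparison with statsmodels (the quadratic data is simulated here purely for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 120)
y = 1.0 + 0.5 * x + 0.8 * x**2 + rng.normal(0, 0.5, 120)  # truly nonlinear
df = pd.DataFrame({"x": x, "y": y})

linear = smf.ols("y ~ x", data=df).fit()
nonlinear = smf.ols("y ~ x + I(x**2)", data=df).fit()

# F-test: does the extra (nonlinear) term explain significantly more variance?
print(anova_lm(linear, nonlinear))
```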
**46,622. Statistical test to determine if a relationship is linear?**

I like the ANOVA-based answer by Stephan Kolassa a lot. However, I would also like to offer a slightly different perspective.

First of all, consider the reason why you're testing for nonlinearity. If you want to test the assumptions of ordinary least squares for estimating the simple linear regression model, note that if ...
**46,623. Statistical test to determine if a relationship is linear?**

Along the lines of what @Kolassa said, I would suggest fitting a non-parametric model (something like a cubic spline smoother) to your data and seeing whether the improvement in fit is significant.

If I remember correctly, the book "Generalized Additive Models" by Hastie and Tibshirani contains a description of a test that ca...
**46,624. Statistical test to determine if a relationship is linear?**

This is far more basic than the other suggestions, but you could plot $\ln(Y)$ against $\ln(X)$. The slope of the best-fit line is then the order of the relationship between X and Y: for instance, if the slope of $\ln(Y)$ vs. $\ln(X)$ is $a$, then the relationship between X and Y is $Y = CX^a$, where $C$ is a constant.

To...
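A sketch of this log-log slope check with NumPy (the data is simulated with a known exponent $a=2$ and multiplicative noise; note the recipe assumes a pure power law $Y = CX^a$, since an additive constant would bend the log-log plot):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 50, 200)
y = 3.0 * x**2 * np.exp(rng.normal(0, 0.05, 200))  # Y = C X^a with a = 2, C = 3

slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print(round(slope, 2))              # close to a = 2
print(round(np.exp(intercept), 2))  # close to C = 3
```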
**46,625. How can one compute Lilliefors' test for arbitrary distributions?**

[At present this only deals with the initial question regarding limitations. I may come back to address some of the other questions.]

The Kolmogorov-Smirnov test (i.e. one with a fully specified continuous distribution) is itself distribution-free: the distribution of the test statistic doesn't depend on what that sp...
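One practical way to handle estimated parameters (the situation Lilliefors tabulated for the normal) is to recalibrate the KS statistic's null distribution by Monte Carlo; a sketch with SciPy (the sample size and replication count are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(5, 2, 50)  # data; pretend mu and sigma are unknown

def ks_stat_fitted(sample):
    # KS statistic against a normal with parameters estimated from the sample
    mu, sd = sample.mean(), sample.std(ddof=1)
    return stats.kstest(sample, "norm", args=(mu, sd)).statistic

d_obs = ks_stat_fitted(x)

# Monte Carlo null: redraw samples, refit, recompute the statistic
null = np.array([ks_stat_fitted(rng.normal(0, 1, x.size)) for _ in range(2000)])
p_mc = np.mean(null >= d_obs)
print(d_obs, p_mc)
```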
**46,626. How to derive the MLE of a Gaussian mixture distribution**

This is the proper start, but I wonder about the wording of the exercise. I would have asked the following:

Write the likelihood of the sample $(x_1,\ldots,x_n)$ when the $X_i$'s are iid from
$$p(x)= \mathbb{P}(K=1)\, N(x\mid\mu_1,\sigma^2_1) + \mathbb{P}(K=0)\, N(x\mid\mu_0,\sigma^2_0)\qquad\qquad(1)$$
and conclude on the lack of ...
46,627 | How to derive the MLE of a Gaussian mixture distribution | I continued working on this exercise and came up with a solution. I'd be glad about comments.
Let $\theta=[\pi_0,\pi_1,\mu_0,\mu_1,\sigma_0^2,\sigma_1^2]$
The likelihood over N observations is given by:
$$ P(x|\theta) = \prod_{i=1}^n \bigg[\pi_0 N(x_i|\mu_0,\sigma_0^2)+\pi_1 N(x_i|\mu_1,\sigma_1^2) \bigg]$$
The likel... | How to derive the MLE of a Gaussian mixture distribution | I continued working on this exercise and came up with a solution. I'd be glad about comments.
Let $\theta=[\pi_0,\pi_1,\mu_0,\mu_1,\sigma_0^2,\sigma_1^2]$
The likelihood over N observations is given | How to derive the MLE of a Gaussian mixture distribution
I continued working on this exercise and came up with a solution. I'd be glad about comments.
Let $\theta=[\pi_0,\pi_1,\mu_0,\mu_1,\sigma_0^2,\sigma_1^2]$
The likelihood over N observations is given by:
$$ P(x|\theta) = \prod_{i=1}^n \bigg[\pi_0 N(x_i|\mu_0,\si... | How to derive the MLE of a Gaussian mixture distribution
I continued working on this exercise and came up with a solution. I'd be glad about comments.
Let $\theta=[\pi_0,\pi_1,\mu_0,\mu_1,\sigma_0^2,\sigma_1^2]$
The likelihood over N observations is given |
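The mixture likelihood written above is straightforward to evaluate numerically. A minimal sketch (function names are illustrative, not from the exercise), together with the E-step responsibilities that an EM iteration would use:

```python
import numpy as np

def normal_pdf(x, mu, var):
    """Density of N(mu, var) evaluated at x."""
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def mixture_loglik(x, pi0, mu0, var0, mu1, var1):
    """log P(x|theta) = sum_i log[pi0 N(x_i|mu0,var0) + (1-pi0) N(x_i|mu1,var1)]."""
    x = np.asarray(x, float)
    dens = pi0 * normal_pdf(x, mu0, var0) + (1.0 - pi0) * normal_pdf(x, mu1, var1)
    return float(np.sum(np.log(dens)))

def responsibilities(x, pi0, mu0, var0, mu1, var1):
    """Posterior probability that each x_i came from component 0 (the E-step)."""
    x = np.asarray(x, float)
    a = pi0 * normal_pdf(x, mu0, var0)
    b = (1.0 - pi0) * normal_pdf(x, mu1, var1)
    return a / (a + b)
```

With `pi0 = 1` the mixture collapses to a single Gaussian, which gives a quick sanity check on the implementation.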
46,628 | How to detect nonlinear relationship? | Furthermore, both Pearson correlation coefficient and Spearman's rank
correlation coefficient were calculated and they were 0.624 and 0.619
respectively. Does this indicate a linear relationship?
No, not necessarily. You can build datasets which have 0.6, but dependence is strongly non-linear, or nearly linear/com... | How to detect nonlinear relationship? | Furthermore, both Pearson correlation coefficient and Spearman's rank
correlation coefficient were calculated and they were 0.624 and 0.619
respectively. Does this indicate a linear relationship?
| How to detect nonlinear relationship?
Furthermore, both Pearson correlation coefficient and Spearman's rank
correlation coefficient were calculated and they were 0.624 and 0.619
respectively. Does this indicate a linear relationship?
No, not necessarily. You can build datasets which have 0.6, but dependence is str... | How to detect nonlinear relationship?
Furthermore, both Pearson correlation coefficient and Spearman's rank
correlation coefficient were calculated and they were 0.624 and 0.619
respectively. Does this indicate a linear relationship?
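The gap between the two coefficients can be probed directly: for a monotone but nonlinear relation, Spearman's rank correlation stays at 1 while Pearson's drops below it. A small illustrative sketch (assumes no ties; the data are made up):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks
    (this simple ranking assumes no tied values)."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v))).astype(float)
    return pearson(rank(x), rank(y))

x = np.linspace(0.1, 2.0, 50)
y = x ** 5          # monotone but strongly nonlinear in x
```

Similar Pearson and Spearman values, as in the question, therefore do not by themselves establish linearity; they only say the monotone component is comparable to the linear one.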
|
46,629 | How to interpret the bandwidth value in a kernel density estimation? | For simplicity, let's assume that we are talking about some really simple kernel, say triangular kernel:
$$ K(x) =
\begin{cases}
1 - |x| & \text{if } x \in [-1, 1] \\
0 & \text{otherwise}
\end{cases}
$$
Recall that in kernel density estimation for estimating density $\hat f_h$ we combine $n$ kernels parametrized by $h$... | How to interpret the bandwidth value in a kernel density estimation? | For simplicity, let's assume that we are talking about some really simple kernel, say triangular kernel:
$$ K(x) =
\begin{cases}
1 - |x| & \text{if } x \in [-1, 1] \\
0 & \text{otherwise}
\end{cases}
| How to interpret the bandwidth value in a kernel density estimation?
For simplicity, let's assume that we are talking about some really simple kernel, say triangular kernel:
$$ K(x) =
\begin{cases}
1 - |x| & \text{if } x \in [-1, 1] \\
0 & \text{otherwise}
\end{cases}
$$
Recall that in kernel density estimation for est... | How to interpret the bandwidth value in a kernel density estimation?
For simplicity, let's assume that we are talking about some really simple kernel, say triangular kernel:
$$ K(x) =
\begin{cases}
1 - |x| & \text{if } x \in [-1, 1] \\
0 & \text{otherwise}
\end{cases}
|
46,630 | Can I use a paired t-test on data that are averages? | It's probably not a problem at all, provided the sample sizes are similar to each other.
There could be complications with small sample sizes, though. Intuitively, an average of a small sample is more variable than averages of large samples. If some pairs are both based on small samples, they could create unusual out... | Can I use a paired t-test on data that are averages? | It's probably not a problem at all, provided the sample sizes are similar to each other.
There could be complications with small sample sizes, though. Intuitively, an average of a small sample is mor | Can I use a paired t-test on data that are averages?
It's probably not a problem at all, provided the sample sizes are similar to each other.
There could be complications with small sample sizes, though. Intuitively, an average of a small sample is more variable than averages of large samples. If some pairs are both ... | Can I use a paired t-test on data that are averages?
It's probably not a problem at all, provided the sample sizes are similar to each other.
There could be complications with small sample sizes, though. Intuitively, an average of a small sample is mor |
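Mechanically, the test only ever sees the per-pair averages; the paired statistic is then just a one-sample t on the differences. A pure-Python sketch (illustrative):

```python
import math

def paired_t(a, b):
    """Paired t statistic and degrees of freedom for paired values
    (here, per-subject averages): t = mean(d) / (sd(d)/sqrt(n))."""
    d = [ai - bi for ai, bi in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n), n - 1
```

Note that this treats every pair of averages as equally precise, which is exactly the concern raised above when the underlying sample sizes differ a lot.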
46,631 | Can I use a paired t-test on data that are averages? | In principle it's fine, but you are not using the full power of your data, since you use only the 15 averages and not the rest of the data. You are testing the null hypothesis of whether $<A_i - B_i>_i = 0$ (with $i$ from 1 to 15), where $A_i$ and $B_i$ are random variables. If you compute not only their means b... | Can I use a paired t-test on data that are averages? | In principle it's fine, but you are not using the full power of your data, since you use only the 15 averages and not the rest of the data. You are testing the null hypothesis of whether $<A_i | Can I use a paired t-test on data that are averages?
In principle it's fine, but you are not using the full power of your data, since you use only the 15 averages and not the rest of the data. You are testing the null hypothesis of whether $<A_i - B_i>_i = 0$ (with $i$ from 1 to 15), where $A_i$ and $B_i$ are ra... | Can I use a paired t-test on data that are averages?
In principle it's fine, but you are not using the full power of your data, since you use only the 15 averages and not the rest of the data. You are testing the null hypothesis of whether $<A_i
46,632 | Should I get 100% classification accuracy on training data? | No, your data may not be perfectly classifiable, especially by a linear classifier, and this is not always because of the classifier or the features you are using. None of the features may contain sufficient differences to provide a clear line.
You may try non-linear models which can provide better classification as well ... | Should I get 100% classification accuracy on training data? | No, your data may not be perfectly classifiable especially by a linear classifier and this is not always because of the classifier or the features you are using. None of the features may contain suffi | Should I get 100% classification accuracy on training data?
No, your data may not be perfectly classifiable, especially by a linear classifier, and this is not always because of the classifier or the features you are using. None of the features may contain sufficient differences to provide a clear line.
You may try non-li... | Should I get 100% classification accuracy on training data?
No, your data may not be perfectly classifiable especially by a linear classifier and this is not always because of the classifier or the features you are using. None of the features may contain suffi |
46,633 | Should I get 100% classification accuracy on training data? | No, it's not always possible to create a linear boundary in the predictor space between all "1"s and "0"s in the data set (which is what would be required to have perfect linear classifier).
E.g., what if you had a single predictor and the training data were $y = (0,0,1,1)$, $x = (1,3,2,4)$. You can imagine similar sce... | Should I get 100% classification accuracy on training data? | No, it's not always possible to create a linear boundary in the predictor space between all "1"s and "0"s in the data set (which is what would be required to have perfect linear classifier).
E.g., wha | Should I get 100% classification accuracy on training data?
No, it's not always possible to create a linear boundary in the predictor space between all "1"s and "0"s in the data set (which is what would be required to have perfect linear classifier).
E.g., what if you had a single predictor and the training data were $... | Should I get 100% classification accuracy on training data?
No, it's not always possible to create a linear boundary in the predictor space between all "1"s and "0"s in the data set (which is what would be required to have perfect linear classifier).
E.g., wha |
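The $y = (0,0,1,1)$, $x = (1,3,2,4)$ example can be checked exhaustively: with a single predictor, a linear classifier is just a threshold on $x$, in one of two orientations. A small sketch (function name is illustrative):

```python
def separable_1d(x, y):
    """Exhaustively check whether some single threshold on x, in either
    orientation, reproduces the 0/1 labels y exactly."""
    xs = sorted(set(x))
    # candidate cuts: below all points, between neighbors, above all points
    cuts = [xs[0] - 1.0] + [(a + b) / 2.0 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1.0]
    for t in cuts:
        pred = [1 if xi > t else 0 for xi in x]
        if pred == list(y) or [1 - p for p in pred] == list(y):
            return True
    return False
```

Running it confirms the point: the interleaved labels cannot be split by any threshold, while the sorted labels can.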
46,634 | Should I get 100% classification accuracy on training data? | No, every dataset is not linearly separable, as previous answers stated.
Unless... You have more predictors than observations (or more columns than rows).
Therefore you should make sure that the feature extraction pipeline you use does not produce more features than your number of observations. | Should I get 100% classification accuracy on training data? | No, every dataset is not linearly separable, as previous answers stated.
Unless... You have more predictors than observations (or more columns than rows).
Therefore you should make sure that the f | Should I get 100% classification accuracy on training data?
No, every dataset is not linearly separable, as previous answers stated.
Unless... You have more predictors than observations (or more columns than rows).
Therefore you should make sure that the feature extraction pipeline you use does not produce more fea... | Should I get 100% classification accuracy on training data?
No, every dataset is not linearly separable, as previous answers stated.
Unless... You have more predictors than observations (or more columns than rows).
Therefore you should make sure that the f |
46,635 | Should I get 100% classification accuracy on training data? | Imagine that it's truly a random data set. Let's say you're trying to classify the data into sick and healthy, and it just happens so that the incidence of sickness is truly random, at least independent of any of your predictors. In this case you shouldn't be getting good accuracy metrics without overfitting | Should I get 100% classification accuracy on training data? | Imagine that it's truly a random data set. Let's say you're trying to classify the data into sick and healthy, and it just happens so that the incidence of sickness is truly random, at least independe | Should I get 100% classification accuracy on training data?
Imagine that it's truly a random data set. Let's say you're trying to classify the data into sick and healthy, and it just happens so that the incidence of sickness is truly random, at least independent of any of your predictors. In this case you shouldn't be ... | Should I get 100% classification accuracy on training data?
Imagine that it's truly a random data set. Let's say you're trying to classify the data into sick and healthy, and it just happens so that the incidence of sickness is truly random, at least independe |
46,636 | Projecting data on a sphere | It might be possible to solve your example problem using a procedure similar to nonclassical metric MDS (using the stress criterion). Initialize the 'projected points' to lie on a sphere (more on this later). Then, use an optimization solver to find the projected points that minimize the objective function. There are a... | Projecting data on a sphere | It might be possible to solve your example problem using a procedure similar to nonclassical metric MDS (using the stress criterion). Initialize the 'projected points' to lie on a sphere (more on this | Projecting data on a sphere
It might be possible to solve your example problem using a procedure similar to nonclassical metric MDS (using the stress criterion). Initialize the 'projected points' to lie on a sphere (more on this later). Then, use an optimization solver to find the projected points that minimize the obj... | Projecting data on a sphere
It might be possible to solve your example problem using a procedure similar to nonclassical metric MDS (using the stress criterion). Initialize the 'projected points' to lie on a sphere (more on this |
46,637 | Projecting data on a sphere | Maybe something like this in R?
http://planspace.org/2013/02/03/pca-3d-visualization-and-clustering-in-r/
The PCA projects onto a plane if you only take the first two axes. If you take the 3rd one you have the projection on a sphere. | Projecting data on a sphere | Maybe something like this in R?
http://planspace.org/2013/02/03/pca-3d-visualization-and-clustering-in-r/
The PCA projects onto a plane if you only take the first two axes. If you take the 3rd one you h | Projecting data on a sphere
Maybe something like this in R?
http://planspace.org/2013/02/03/pca-3d-visualization-and-clustering-in-r/
The PCA projects onto a plane if you only take the first two axes. If you take the 3rd one you have the projection on a sphere. | Projecting data on a sphere
Maybe something like this in R?
http://planspace.org/2013/02/03/pca-3d-visualization-and-clustering-in-r/
The PCA projects onto a plane if you only take the first two axes. If you take the 3rd one you h
46,638 | Projecting data on a sphere | Relational Perspective Map (http://www.visumap.net/index.aspx?p=Resources/RpmOverview) might be something interesting for you. RPM was originally proposed for the torus surface, then extended to other low-dimensional manifolds like the 3D sphere and projective plane. A key to design MDS on closed manifold is that the "dis... | Projecting data on a sphere | Relational Perspective Map (http://www.visumap.net/index.aspx?p=Resources/RpmOverview) might be something interesting for you. RPM was originally proposed for the torus surface, then extended to othe
Relational Perspective Map (http://www.visumap.net/index.aspx?p=Resources/RpmOverview) might be something interesting for you. RPM was originally proposed for the torus surface, then extended to other low-dimensional manifolds like the 3D sphere and projective plane. A key to design MDS on clos... | Projecting data on a sphere
Relational Perspective Map (http://www.visumap.net/index.aspx?p=Resources/RpmOverview) might be something interesting for you. RPM was originally proposed for the torus surface, then extended to othe
46,639 | Why do we use the Unregularized Cost to plot a Learning Curve? | Background: I believe you are referring to this lecture dealing with Regularization and Bias/Variance in the context of polynomial regression.
The algorithm fmincg produces optimized estimated $\hat \theta$ coefficients (or parameters), based on a gradient descent computation derived from the objective function:
$$J(\t... | Why do we use the Unregularized Cost to plot a Learning Curve? | Background: I believe you are referring to this lecture dealing with Regularization and Bias/Variance in the context of polynomial regression.
The algorithm fmincg produces optimized estimated $\hat \ | Why do we use the Unregularized Cost to plot a Learning Curve?
Background: I believe you are referring to this lecture dealing with Regularization and Bias/Variance in the context of polynomial regression.
The algorithm fmincg produces optimized estimated $\hat \theta$ coefficients (or parameters), based on a gradient ... | Why do we use the Unregularized Cost to plot a Learning Curve?
Background: I believe you are referring to this lecture dealing with Regularization and Bias/Variance in the context of polynomial regression.
The algorithm fmincg produces optimized estimated $\hat \ |
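The distinction can be made concrete for regularized linear regression (sketched in Python rather than the lecture's Octave; names are illustrative): fit $\hat\theta$ with the penalty turned on, but evaluate the learning-curve points with $\lambda = 0$.

```python
import numpy as np

def cost(theta, X, y, lam=0.0):
    """J(theta) = (1/(2m)) * [sum of squared errors + lam * sum_j>=1 theta_j^2].
    The intercept theta[0] is conventionally not penalized.  Learning
    curves are plotted with lam=0 even when theta was fitted with lam>0,
    so that the curve reports the actual prediction error."""
    theta, X, y = np.asarray(theta, float), np.asarray(X, float), np.asarray(y, float)
    m = len(y)
    err = X @ theta - y
    return float((err @ err + lam * np.sum(theta[1:] ** 2)) / (2 * m))
```

Calling `cost(theta_fit, X, y)` (default `lam=0.0`) on training and cross-validation sets gives the unregularized curves, regardless of the `lam` used during fitting.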
46,640 | Why do we use the Unregularized Cost to plot a Learning Curve? | Because if you want to know the actual cost, you need to look at the unregularized cost.
Consider LASSO regression, where we tell a kind of mathematical white lie in the formula by adding a term that reflects the sum of the parameter values. Adding this term influences the parameter estimates. But at the end of the ... | Why do we use the Unregularized Cost to plot a Learning Curve? | Because if you want to know the actual cost, you need to look at the unregularized cost.
Consider LASSO regression, where we tell a kind of mathematical white lie in the formula by adding a term th | Why do we use the Unregularized Cost to plot a Learning Curve?
Because if you want to know the actual cost, you need to look at the unregularized cost.
Consider LASSO regression, where we tell a kind of mathematical white lie in the formula by adding a term that reflects the sum of the parameter values. Adding this term influences the parameter estimates. But at the end of the ... | Why do we use the Unregularized Cost to plot a Learning Curve?
Because if you want to know the actual cost, you need to look at the unregularized cost.
Consider LASSO regression, where we tell a kind of mathematical white lie in the formula by adding a term th
46,641 | Is Simpson's Paradox always an example of confounding? | Here's a simple visual example of Simpson's Paradox where there is no confounding:
Observing the relationship between the two variables Sex and Medical Cost, there would appear to be a strong causal relationship:
However if you add a third variable, Age, in the causal diagram:
it becomes clear that the relationship ... | Is Simpson's Paradox always an example of confounding? | Here's a simple visual example of Simpson's Paradox where there is no confounding:
Observing the relationship between the two variables Sex and Medical Cost, there would appear to be a strong causal | Is Simpson's Paradox always an example of confounding?
Here's a simple visual example of Simpson's Paradox where there is no confounding:
Observing the relationship between the two variables Sex and Medical Cost, there would appear to be a strong causal relationship:
However if you add a third variable, Age, in the c... | Is Simpson's Paradox always an example of confounding?
Here's a simple visual example of Simpson's Paradox where there is no confounding:
Observing the relationship between the two variables Sex and Medical Cost, there would appear to be a strong causal |
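Whatever the causal reading, the arithmetic of a Simpson-style sign reversal is easy to reproduce with made-up numbers (a sketch; the data below are invented for illustration only):

```python
import numpy as np

def slope(x, y):
    """Least-squares slope of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2))

# Within each group, y falls by exactly 1 per unit of x; but the groups
# differ in both their x level and their y level, so pooling them
# flips the sign of the fitted slope.
x_a, y_a = [1, 2, 3], [10, 9, 8]
x_b, y_b = [11, 12, 13], [20, 19, 18]
```

The reversal here is produced purely by the group-level offsets, independently of whether the grouping variable deserves to be called a confounder.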
46,642 | Is Simpson's Paradox always an example of confounding? | You could imagine forming subclasses based on X, and the relationship between X and Y within each subclass opposes the relationship between X and Y across the sample. You could conceive of the subclasses as a confounder, but if you've artificially imposed them and they come from nothing but the already measured X varia... | Is Simpson's Paradox always an example of confounding? | You could imagine forming subclasses based on X, and the relationship between X and Y within each subclass opposes the relationship between X and Y across the sample. You could conceive of the subclas | Is Simpson's Paradox always an example of confounding?
You could imagine forming subclasses based on X, and the relationship between X and Y within each subclass opposes the relationship between X and Y across the sample. You could conceive of the subclasses as a confounder, but if you've artificially imposed them and ... | Is Simpson's Paradox always an example of confounding?
You could imagine forming subclasses based on X, and the relationship between X and Y within each subclass opposes the relationship between X and Y across the sample. You could conceive of the subclas |
46,643 | Is Simpson's Paradox always an example of confounding? | No, Simpson's paradox is not always about confounding. In fact, I would say there is no reason to be surprised by sign reversals if you already know the covariate you adjust for is a confounder, you should check this answer here. You can have sign reversal adjusting for colliders or mediators, and without causal knowle... | Is Simpson's Paradox always an example of confounding? | No, Simpson's paradox is not always about confounding. In fact, I would say there is no reason to be surprised by sign reversals if you already know the covariate you adjust for is a confounder, you s | Is Simpson's Paradox always an example of confounding?
No, Simpson's paradox is not always about confounding. In fact, I would say there is no reason to be surprised by sign reversals if you already know the covariate you adjust for is a confounder, you should check this answer here. You can have sign reversal adjustin... | Is Simpson's Paradox always an example of confounding?
No, Simpson's paradox is not always about confounding. In fact, I would say there is no reason to be surprised by sign reversals if you already know the covariate you adjust for is a confounder, you s |
46,644 | Hosmer-Lemeshow test in R | Update 28 July: the below has now been pushed to CRAN in the generalhoslem package along with the Lipsitz and Pulkstenis-Robinson tests.
Fagerland and Hosmer discuss a generalisation of the Hosmer-Lemeshow test and two other approaches (the Lipsitz test and Pulkstenis-Robinson tests) in A goodness-of-fit test for the ... | Hosmer-Lemeshow test in R | Update 28 July: the below has now been pushed to CRAN in the generalhoslem package along with the Lipsitz and Pulkstenis-Robinson tests.
Fagerland and Hosmer discuss a generalisation of the Hosmer-Le | Hosmer-Lemeshow test in R
Update 28 July: the below has now been pushed to CRAN in the generalhoslem package along with the Lipsitz and Pulkstenis-Robinson tests.
Fagerland and Hosmer discuss a generalisation of the Hosmer-Lemeshow test and two other approaches (the Lipsitz test and Pulkstenis-Robinson tests) in A goo... | Hosmer-Lemeshow test in R
Update 28 July: the below has now been pushed to CRAN in the generalhoslem package along with the Lipsitz and Pulkstenis-Robinson tests.
Fagerland and Hosmer discuss a generalisation of the Hosmer-Le |
46,645 | How to find the sweet spot | By "sweet spot," I think we can assume you mean the inflection point -- the point where the growth in new users rolls over and begins to flatten out towards an asymptotic max. There is no shortage of ways to analyze this information. One of them is as a diffusion process. Something that might help you visualize this ... | How to find the sweet spot | By "sweet spot," I think we can assume you mean the inflection point -- the point where the growth in new users rolls over and begins to flatten out towards an asymptotic max. There is no shortage o
By "sweet spot," I think we can assume you mean the inflection point -- the point where the growth in new users rolls over and begins to flatten out towards an asymptotic max. There is no shortage of ways to analyze this information. One of them is as a diffusion process. Something that mig... | How to find the sweet spot
By "sweet spot," I think we can assume you mean the inflection point -- the point where the growth in new users rolls over and begins to flatten out towards an asymptotic max. There is no shortage o
46,646 | How to find the sweet spot | What you are dealing with here is a regression problem with count data as the response variable. Before speculating on the existence of a "sweet spot" for the level of advertising, I recommend you just try to model the relationship between these variables. A negative binomial GLM would be a good place to start (see h... | How to find the sweet spot | What you are dealing with here is a regression problem with count data as the response variable. Before speculating on the existence of a "sweet spot" for the level of advertising, I recommend you ju | How to find the sweet spot
What you are dealing with here is a regression problem with count data as the response variable. Before speculating on the existence of a "sweet spot" for the level of advertising, I recommend you just try to model the relationship between these variables. A negative binomial GLM would be a... | How to find the sweet spot
What you are dealing with here is a regression problem with count data as the response variable. Before speculating on the existence of a "sweet spot" for the level of advertising, I recommend you ju |
46,647 | How to find the sweet spot | You cannot deduce from the data that such a point really exists. You have a theory in your head that at some point more trp is not going to gain more users, but that is not in your data. You will need to formulate this belief as a mathematical model, then fit your data to that model, and then you can ask your questi... | How to find the sweet spot | You cannot deduce from the data that such a point really exists. You have a theory in your head that at some point more trp is not going to gain more users, but that is not in your data. You will nee
You cannot deduce from the data that such a point really exists. You have a theory in your head that at some point more trp is not going to gain more users, but that is not in your data. You will need to formulate this belief as a mathematical model, then fit your data to that model, and then you can ask your questi... | How to find the sweet spot
You cannot deduce from the data that such a point really exists. You have a theory in your head that at some point more trp is not going to gain more users, but that is not in your data. You will nee
46,648 | Similarity function with given properties | The function
$$ f\colon [0,1]\times[0,1]\to[0,1], \quad(x,y)\mapsto \frac{1}{4}x+\frac{1}{4}y+\frac{3}{4}(x-y)^2 $$
does what you want. Plus, it's positive, symmetric and definite ($x\neq y$ implies that $f(x,y)>0$).
Neither it nor its root is linearly homogeneous like a norm-derived distance function, though ($f(\lamb... | Similarity function with given properties | The function
$$ f\colon [0,1]\times[0,1]\to[0,1], \quad(x,y)\mapsto \frac{1}{4}x+\frac{1}{4}y+\frac{3}{4}(x-y)^2 $$
does what you want. Plus, it's positive, symmetric and definite ($x\neq y$ implies t | Similarity function with given properties
The function
$$ f\colon [0,1]\times[0,1]\to[0,1], \quad(x,y)\mapsto \frac{1}{4}x+\frac{1}{4}y+\frac{3}{4}(x-y)^2 $$
does what you want. Plus, it's positive, symmetric and definite ($x\neq y$ implies that $f(x,y)>0$).
Neither it nor its root is linearly homogeneous like a norm-d... | Similarity function with given properties
The function
$$ f\colon [0,1]\times[0,1]\to[0,1], \quad(x,y)\mapsto \frac{1}{4}x+\frac{1}{4}y+\frac{3}{4}(x-y)^2 $$
does what you want. Plus, it's positive, symmetric and definite ($x\neq y$ implies t |
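The claimed properties of $f$ (values in $[0,1]$, symmetry, and definiteness) are easy to verify numerically on a grid; a quick illustrative sketch:

```python
def f(x, y):
    """f(x, y) = x/4 + y/4 + (3/4)*(x - y)^2 on [0,1] x [0,1]."""
    return 0.25 * x + 0.25 * y + 0.75 * (x - y) ** 2

# grid of test points in [0,1] x [0,1]
grid = [i / 50.0 for i in range(51)]
pairs = [(a, b) for a in grid for b in grid]
```

The extremes are attained at the corners: $f(0,0)=0$ and $f(1,0)=f(0,1)=1$.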
46,649 | Meaning of Borel sets in discrete spaces | If $\Omega$ is countable, then we may without loss of generality label the outcomes by the integers and set $\Omega = \{1, 2, \dots\}$. This follows from the definition of countability.
That is, even if we are interested in an experiment where we pick balls from an urn, we can label the outcomes in the sample space by ... | Meaning of Borel sets in discrete spaces | If $\Omega$ is countable, then we may without loss of generality label the outcomes by the integers and set $\Omega = \{1, 2, \dots\}$. This follows from the definition of countability.
That is, even | Meaning of Borel sets in discrete spaces
If $\Omega$ is countable, then we may without loss of generality label the outcomes by the integers and set $\Omega = \{1, 2, \dots\}$. This follows from the definition of countability.
That is, even if we are interested in an experiment where we pick balls from an urn, we can l... | Meaning of Borel sets in discrete spaces
If $\Omega$ is countable, then we may without loss of generality label the outcomes by the integers and set $\Omega = \{1, 2, \dots\}$. This follows from the definition of countability.
That is, even |
46,650 | Strange outcomes in binary logistic regression in SPSS | mdewey already gave a good answer. However, given that SPSS did give you parameter estimates, I suspect you don't have full separation, but more probably multicollinearity, also known simply as "collinearity" - some of your predictors carry almost the same information, which commonly leads to large parameter estimates ... | Strange outcomes in binary logistic regression in SPSS | mdewey already gave a good answer. However, given that SPSS did give you parameter estimates, I suspect you don't have full separation, but more probably multicollinearity, also known simply as "colli | Strange outcomes in binary logistic regression in SPSS
mdewey already gave a good answer. However, given that SPSS did give you parameter estimates, I suspect you don't have full separation, but more probably multicollinearity, also known simply as "collinearity" - some of your predictors carry almost the same informat... | Strange outcomes in binary logistic regression in SPSS
mdewey already gave a good answer. However, given that SPSS did give you parameter estimates, I suspect you don't have full separation, but more probably multicollinearity, also known simply as "colli |
46,651 | Strange outcomes in binary logistic regression in SPSS | You almost certainly have separation here. If you tabulate the outcome by your suspect predictors you will find that (a) if the predictor is binary there is only one level of your outcome for one level of the predictor (b) if your predictor is continuous then for a range of values above (below) a cut-off you only have o... | Strange outcomes in binary logistic regression in SPSS | You almost certainly have separation here. If you tabulate the outcome by your suspect predictors you will find that (a) if the predictor is binary there is only one level of your outcome for one leve
You almost certainly have separation here. If you tabulate the outcome by your suspect predictors you will find that (a) if the predictor is binary there is only one level of your outcome for one level of the predictor (b) if your predictor is continuous then for a ... | Strange outcomes in binary logistic regression in SPSS
You almost certainly have separation here. If you tabulate the outcome by your suspect predictors you will find that (a) if the predictor is binary there is only one level of your outcome for one leve |
46,652 | Strange outcomes in binary logistic regression in SPSS | One other postscript: logistic regression does have problems when there is separation. Two alternatives would be to use penalized logistic, which is available as the STATS FIRTHLOG extension command or to use DISCRIMINANT, which works even when there is separation. | Strange outcomes in binary logistic regression in SPSS | One other postscript: logistic regression does have problems when there is separation. Two alternatives would be to use penalized logistic, which is available as the STATS FIRTHLOG extension command | Strange outcomes in binary logistic regression in SPSS
One other postscript: logistic regression does have problems when there is separation. Two alternatives would be to use penalized logistic, which is available as the STATS FIRTHLOG extension command or to use DISCRIMINANT, which works even when there is separation... | Strange outcomes in binary logistic regression in SPSS
One other postscript: logistic regression does have problems when there is separation. Two alternatives would be to use penalized logistic, which is available as the STATS FIRTHLOG extension command |
46,653 | Relationship between Gaussian process and Regression by supervised learning model like SVR | There exists a very strong link between Gaussian process regression and kernel Ridge regression (also called Tikhonov regularization). Indeed, the posterior expectation you compute using Bayesian inference with prior $\mathcal{GP}(0,k)$ and additive noise model $\mathcal{N}(0,\eta^2)$ gives exactly the same predictions... | Relationship between Gaussian process and Regression by supervised learning model like SVR | There exists a very strong link between Gaussian process regression and kernel Ridge regression (also called Tikhonov regularization). Indeed, the posterior expectation you compute using Bayesian infe | Relationship between Gaussian process and Regression by supervised learning model like SVR
There exists a very strong link between Gaussian process regression and kernel Ridge regression (also called Tikhonov regularization). Indeed, the posterior expectation you compute using Bayesian inference with prior $\mathcal{GP... | Relationship between Gaussian process and Regression by supervised learning model like SVR
There exists a very strong link between Gaussian process regression and kernel Ridge regression (also called Tikhonov regularization). Indeed, the posterior expectation you compute using Bayesian infe |
46,654 | Mode of the $ \chi^2 $ distribution | The pdf of a $\chi^2_k$ distribution is,
$$f(x) = 2^{-k/2} \Gamma{(k/2)}^{-1} x^{k/2 - 1}e^{-x/2}. $$
We need to find $x^*$ such that $x^* = \arg\max\limits_{x > 0} f(x)$. Then $x^*$ is the mode. Note that $\arg\max\limits_{x > 0} f(x) = \arg\max\limits_{x > 0} \log f(x)$, so we will find the mode by maximizing the... | Mode of the $ \chi^2 $ distribution | The pdf of a $\chi^2_k$ distribution is,
$$f(x) = 2^{-k/2} \Gamma{(k/2)}^{-1} x^{k/2 - 1}e^{-x/2}. $$
We need to find $x^*$ such that $x^* = \arg\max\limits_{x > 0} f(x)$. Then $x^*$ is the mode. Not | Mode of the $ \chi^2 $ distribution
The pdf of a $\chi^2_k$ distribution is,
$$f(x) = 2^{-k/2} \Gamma{(k/2)}^{-1} x^{k/2 - 1}e^{-x/2}. $$
We need to find $x^*$ such that $x^* = \arg\max\limits_{x > 0} f(x)$. Then $x^*$ is the mode. Note that $\arg\max\limits_{x > 0} f(x) = \arg\max\limits_{x > 0} \log f(x)$, so we ... | Mode of the $ \chi^2 $ distribution
The pdf of a $\chi^2_k$ distribution is,
$$f(x) = 2^{-k/2} \Gamma{(k/2)}^{-1} x^{k/2 - 1}e^{-x/2}. $$
We need to find $x^*$ such that $x^* = \arg\max\limits_{x > 0} f(x)$. Then $x^*$ is the mode. Not
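The derivation above leads to the mode $x^* = k - 2$ (for $k \ge 2$). That claim is easy to sanity-check numerically by maximizing the log-pdf on a grid; a small pure-Python sketch (the grid bounds and resolution are arbitrary choices):

```python
import math

def chi2_logpdf(x, k):
    # log of f(x) = 2^{-k/2} * Gamma(k/2)^{-1} * x^{k/2 - 1} * e^{-x/2}
    return (-(k / 2) * math.log(2) - math.lgamma(k / 2)
            + (k / 2 - 1) * math.log(x) - x / 2)

def numeric_mode(k, lo=1e-6, hi=20.0, n=200_000):
    # crude grid search for the maximiser of the log-density
    step = (hi - lo) / n
    return max((lo + i * step for i in range(n + 1)),
               key=lambda x: chi2_logpdf(x, k))
```

For `k = 5` the grid maximiser lands near $3 = k - 2$, and for `k = 10` near $8$.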
46,655 | Mode of the $ \chi^2 $ distribution | The solution by Greenparker is correct and the double derivative can be proved to be less than 0. Substitute x=k-2 -> k=x+2 in the 2nd derivative solution obtained (i.e.
-(((x+2)/2)-1)*x^-2)
Thus we get (-x/2)*(x^-2), which is negative. Hence the double derivative is negative. | Mode of the $ \chi^2 $ distribution | The solution by Greenparker is correct and the double derivative can be proved to be less than 0. Substitute x=k-2 -> k=x+2 in the 2nd derivative solution obtained (i.e.
-(((x+2)/2)-1)*x^-2)
| Mode of the $ \chi^2 $ distribution
The solution by Greenparker is correct and the double derivative can be proved to be less than 0. Substitute x=k-2 -> k=x+2 in the 2nd derivative solution obtained (i.e.
-(((x+2)/2)-1)*x^-2)
Thus we get (-x/2)*(x^-2), which is negative. Hence the double derivative is negative. | Mode of the $ \chi^2 $ distribution
The solution by Greenparker is correct and the double derivative can be proved to be less than 0. Substitute x=k-2 -> k=x+2 in the 2nd derivative solution obtained (i.e.
-(((x+2)/2)-1)*x^-2)
|
46,656 | Reference level in GLM regression | I would mostly choose as reference level one which gives meaning in the applied context, that is, a reference level that actually is interesting as a reference in the application. So, in an experiment with several treatments and one control, I would choose the control as the reference level, in a marketing context with... | Reference level in GLM regression | I would mostly choose as reference level one which gives meaning in the applied context, that is, a reference level that actually is interesting as a reference in the application. So, in an experiment | Reference level in GLM regression
I would mostly choose as reference level one which gives meaning in the applied context, that is, a reference level that actually is interesting as a reference in the application. So, in an experiment with several treatments and one control, I would choose the control as the reference ... | Reference level in GLM regression
I would mostly choose as reference level one which gives meaning in the applied context, that is, a reference level that actually is interesting as a reference in the application. So, in an experiment |
46,657 | Difference of Frechet variables | The statistical understanding of the parameters--$m$ is a location, $s$ is a scale, and $\alpha$ is a power transformation--tells us how to proceed.
Consider this generalization of the problem. Let $F$ be any distribution function. Let $\{t_\alpha\,|\, \alpha\in A\subset\mathbb{R}^p\}$ be a parameterized family of s... | Difference of Frechet variables | The statistical understanding of the parameters--$m$ is a location, $s$ is a scale, and $\alpha$ is a power transformation--tells us how to proceed.
Consider this generalization of the problem. Let | Difference of Frechet variables
The statistical understanding of the parameters--$m$ is a location, $s$ is a scale, and $\alpha$ is a power transformation--tells us how to proceed.
Consider this generalization of the problem. Let $F$ be any distribution function. Let $\{t_\alpha\,|\, \alpha\in A\subset\mathbb{R}^p\}... | Difference of Frechet variables
The statistical understanding of the parameters--$m$ is a location, $s$ is a scale, and $\alpha$ is a power transformation--tells us how to proceed.
Consider this generalization of the problem. Let |
46,658 | Difference of Frechet variables | I rename the variables $X_1=X$ and $X_2 = Y$ so the question is: What is the probability that $X_1>X_2$ given that $X_j$ is Frechet$(\alpha,s_j,m)$?
First note that this problem can be considered as the problem that
$$X_1 = \max\{X_1,X_2\},$$
this problem is well known in the theory of extreme values. First I note tha... | Difference of Frechet variables | I rename the variables $X_1=X$ and $X_2 = Y$ so the question is: What is the probability that $X_1>X_2$ given that $X_j$ is Frechet$(\alpha,s_j,m)$?
First note that this problem can be considered as | Difference of Frechet variables
I rename the variables $X_1=X$ and $X_2 = Y$ so the question is: What is the probability that $X_1>X_2$ given that $X_j$ is Frechet$(\alpha,s_j,m)$?
First note that this problem can be considered as the problem that
$$X_1 = \max\{X_1,X_2\},$$
this problem is well known in the theory of ... | Difference of Frechet variables
I rename the variables $X_1=X$ and $X_2 = Y$ so the question is: What is the probability that $X_1>X_2$ given that $X_j$ is Frechet$(\alpha,s_j,m)$?
First note that this problem can be considered as |
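For equal shape $\alpha$ and location $m$, this extreme-value route ends in the closed form $P(X_1 > X_2) = s_1^\alpha/(s_1^\alpha + s_2^\alpha)$. A Monte Carlo sketch checking that value via inverse-CDF sampling (the specific parameter values below are arbitrary):

```python
import math
import random

def frechet_sample(alpha, s, m, rng):
    # inverse-CDF sampling from F(x) = exp(-((x - m)/s)^(-alpha)), x > m
    u = rng.random()
    while u == 0.0:          # guard the (measure-zero) log(0) case
        u = rng.random()
    return m + s * (-math.log(u)) ** (-1.0 / alpha)

def prob_x1_gt_x2(alpha, s1, s2, m=0.0, n=200_000, seed=1):
    rng = random.Random(seed)
    wins = sum(frechet_sample(alpha, s1, m, rng) > frechet_sample(alpha, s2, m, rng)
               for _ in range(n))
    return wins / n

est = prob_x1_gt_x2(alpha=2.0, s1=1.0, s2=2.0)
closed_form = 1.0 ** 2.0 / (1.0 ** 2.0 + 2.0 ** 2.0)   # s1^a / (s1^a + s2^a) = 0.2
```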
46,659 | Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same? | Although these three approaches consist in dividing a dataset into several subsets, they are still different in the main purpose of this division.
K-Fold Cross Validation (CV)
It consists in dividing the original set of observations into k subsets of more or less the same size. Then, you will use one of the subsets as test set a... | Although these three approaches consist in dividing a dataset into several subsets, they are still different in the main purpose of this division. | Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
K-Fold Cross Validation (CV)
It consists in dividing the | Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
Although these three approaches consist in dividing a dataset into several subsets, they are still different in the main purpose of this division.
K-Fold Cross Validation (CV)
It consists in dividing the original set of observations into k subsets of mo... | Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
Although these three approaches consist in dividing a dataset into several subsets, they are still different in the main purpose of this division.
K-Fold Cross Validation (CV)
It consists in dividing the |
46,660 | Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same? | First of all, you're right about the similarities: they are all types of resampling-based error estimates. Now about the differences.
cross validation vs. out-of-bootstrap: cross validation (as well as random splitting procedures known as set validation or hold-out validation) uses resampling without replacement wherea... | First of all, you're right about the similarities: they are all types of resampling-based error estimates. Now about the differences.
cross validation vs. out-of-bootstrap: cross validation (as well | Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
First of all, you're right about the similarities: they are all types of resampling-based error estimates. Now about the differences.
cross validation vs. out-of-bootstrap: cross validation (as well as random splitting procedures known as set vali... | Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
First of all, you're right about the similarities: they are all types of resampling-based error estimates. Now about the differences.
cross validation vs. out-of-bootstrap: cross validation (as well |
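The without- vs. with-replacement distinction is easy to see directly. A plain-Python sketch (sizes are arbitrary): k-fold puts every index in exactly one test fold, while a bootstrap resample of size n leaves roughly a fraction $1 - 1/e \approx 0.368$ of the indices out-of-bag:

```python
import random

def kfold_indices(n, k, rng):
    # resampling WITHOUT replacement: each index lands in exactly one test fold
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def oob_fraction(n, rng):
    # resampling WITH replacement: indices never drawn are "out-of-bag"
    drawn = {rng.randrange(n) for _ in range(n)}
    return 1 - len(drawn) / n

rng = random.Random(0)
folds = kfold_indices(20, 4, rng)
frac = oob_fraction(100_000, rng)   # close to 1 - 1/e
```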
46,661 | Can a statistic depend on a parameter? | A statistic cannot be a function of unknown parameters by definition. In the case of the $t$ test our test statistic takes the form
$$
\frac{\sqrt{n}(\bar{x} - \mu_0)}{s}
$$
where $\mu_0$ is the hypothesized value for the unknown mean. That is, the $t$ statistic is a function of the data and the particular hypothesis... | Can a statistic depend on a parameter? | A statistic cannot be a function of unknown parameters by definition. In the case of the $t$ test our test statistic takes the form
$$
\frac{\sqrt{n}(\bar{x} - \mu_0)}{s}
$$
where $\mu_0$ is the hypo | Can a statistic depend on a parameter?
A statistic cannot be a function of unknown parameters by definition. In the case of the $t$ test our test statistic takes the form
$$
\frac{\sqrt{n}(\bar{x} - \mu_0)}{s}
$$
where $\mu_0$ is the hypothesized value for the unknown mean. That is, the $t$ statistic is a function of... | Can a statistic depend on a parameter?
A statistic cannot be a function of unknown parameters by definition. In the case of the $t$ test our test statistic takes the form
$$
\frac{\sqrt{n}(\bar{x} - \mu_0)}{s}
$$
where $\mu_0$ is the hypo |
46,662 | Can a statistic depend on a parameter? | A test statistic is a function of observable random variables whose distribution does not depend on any unknown parameters. For example, if n is large enough, then the central limit theorem says that the normal distribution with mean zero and variance one is approximately valid for the test statistic:
$$
T=\frac{\bar{... | Can a statistic depend on a parameter? | A test statistic is a function of observable random variables whose distribution does not depend on any unknown parameters. For example, if n is large enough, then the central limit theorem says that | Can a statistic depend on a parameter?
A test statistic is a function of observable random variables whose distribution does not depend on any unknown parameters. For example, if n is large enough, then the central limit theorem says that the normal distribution with mean zero and variance one is approximately valid f... | Can a statistic depend on a parameter?
A test statistic is a function of observable random variables whose distribution does not depend on any unknown parameters. For example, if n is large enough, then the central limit theorem says that |
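The point in both answers is that $T$ uses only the data and the hypothesized value $\mu_0$, never the unknown $\mu$ or $\sigma$. A small sketch computing $T = \sqrt{n}(\bar{x} - \mu_0)/s$ (the sample values are made up):

```python
import math

def t_statistic(xs, mu0):
    # T = sqrt(n) * (xbar - mu0) / s -- a function of the data and mu0 only
    n = len(xs)
    xbar = sum(xs) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))
    return math.sqrt(n) * (xbar - mu0) / s

sample = [4.8, 5.1, 5.0, 4.7, 5.4, 5.0]
t0 = t_statistic(sample, 5.0)   # sample mean equals 5.0, so t0 is 0
```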
46,663 | Do we say that the $y_i$'s are i.i.d. if $n_iy_i \sim \text{Binomial}(n_i, \theta)$? | The $y_i$ are clearly not identically distributed -- their distribution functions
differ!
For example:
$\qquad$ $\qquad$ Two scaled binomials with common $p=0.4$.
As you see, those two functions don't coincide, so the distributions are not identical.
As for how you could describe them, you might say the $y_i$ are ind... | Do we say that the $y_i$'s are i.i.d. if $n_iy_i \sim \text{Binomial}(n_i, \theta)$? | The $y_i$ are clearly not identically distributed -- their distribution functions
differ!
For example:
$\qquad$ $\qquad$ Two scaled binomials with common $p=0.4$.
As you see, those two functions don' | Do we say that the $y_i$'s are i.i.d. if $n_iy_i \sim \text{Binomial}(n_i, \theta)$?
The $y_i$ are clearly not identically distributed -- their distribution functions
differ!
For example:
$\qquad$ $\qquad$ Two scaled binomials with common $p=0.4$.
As you see, those two functions don't coincide, so the distributions ar... | Do we say that the $y_i$'s are i.i.d. if $n_iy_i \sim \text{Binomial}(n_i, \theta)$?
The $y_i$ are clearly not identically distributed -- their distribution functions
differ!
For example:
$\qquad$ $\qquad$ Two scaled binomials with common $p=0.4$.
As you see, those two functions don' |
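This is also easy to see in simulation: with $y_i = (\text{successes})/n_i$, every $y_i$ has mean $p$, but the variance $p(1-p)/n_i$ changes with $n_i$, so the distributions cannot coincide. A sketch with two arbitrary $n_i$ values:

```python
import random

def scaled_binomial(n_i, p, rng):
    # y_i = X_i / n_i with X_i ~ Binomial(n_i, p)
    return sum(rng.random() < p for _ in range(n_i)) / n_i

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(0)
p = 0.4
m5, v5 = mean_var([scaled_binomial(5, p, rng) for _ in range(10_000)])
m50, v50 = mean_var([scaled_binomial(50, p, rng) for _ in range(10_000)])
# both means are near 0.4, but the variances differ by a factor of ~10
```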
46,664 | Accuracy of model lower than no-information rate? | This is a strong argument why you should never use a discontinuous improper accuracy scoring rule. It should also be a clue that any scoring rule that tempts you to remove data from the sample has to be bogus. If you were truly interested in all-or-nothing classification then just ignore all the data and predict that... | Accuracy of model lower than no-information rate? | This is a strong argument why you should never use a discontinuous improper accuracy scoring rule. It should also be a clue that any scoring rule that tempts you to remove data from the sample has to | Accuracy of model lower than no-information rate?
This is a strong argument why you should never use a discontinuous improper accuracy scoring rule. It should also be a clue that any scoring rule that tempts you to remove data from the sample has to be bogus. If you were truly interested in all-or-nothing classificat... | Accuracy of model lower than no-information rate?
This is a strong argument why you should never use a discontinuous improper accuracy scoring rule. It should also be a clue that any scoring rule that tempts you to remove data from the sample has to |
46,665 | Accuracy of model lower than no-information rate? | In highly skewed data sets beating the default accuracy can be very difficult and the ability to build a successful model may depend on how many positive examples you have and what the goals of your model are. Even with a very strong skew building reasonable models is possible, as an example the ipinyou data set has ap... | Accuracy of model lower than no-information rate? | In highly skewed data sets beating the default accuracy can be very difficult and the ability to build a successful model may depend on how many positive examples you have and what the goals of your m | Accuracy of model lower than no-information rate?
In highly skewed data sets beating the default accuracy can be very difficult and the ability to build a successful model may depend on how many positive examples you have and what the goals of your model are. Even with a very strong skew building reasonable models is p... | Accuracy of model lower than no-information rate?
In highly skewed data sets beating the default accuracy can be very difficult and the ability to build a successful model may depend on how many positive examples you have and what the goals of your m |
46,666 | Accuracy of model lower than no-information rate? | If I understand the question correctly, it should also be mentioned that a lot of the "standard" model statistics are meaningless on the test set as you have probably only applied the imbalance adjustment techniques on the training set. In this case, as @Jonno Bourne pointed out, the AUC would be a better accuracy meas... | Accuracy of model lower than no-information rate? | If I understand the question correctly, it should also be mentioned that a lot of the "standard" model statistics are meaningless on the test set as you have probably only applied the imbalance adjust | Accuracy of model lower than no-information rate?
If I understand the question correctly, it should also be mentioned that a lot of the "standard" model statistics are meaningless on the test set as you have probably only applied the imbalance adjustment techniques on the training set. In this case, as @Jonno Bourne po... | Accuracy of model lower than no-information rate?
If I understand the question correctly, it should also be mentioned that a lot of the "standard" model statistics are meaningless on the test set as you have probably only applied the imbalance adjust |
46,667 | Visualizing C5.0 Decision Tree? [closed] | I might be missing something in your question but simply
plot(myTree)
gives you a visualization of the tree (based on the infrastructure in partykit)
Of course the tree is very large and you either need to zoom into the image or use a large screen to read it...
You can also use partykit to just display subtrees. For ... | Visualizing C5.0 Decision Tree? [closed] | I might be missing something in your question but simply
plot(myTree)
gives you a visualization of the tree (based on the infrastructure in partykit)
Of course the tree is very large and you either | Visualizing C5.0 Decision Tree? [closed]
I might be missing something in your question but simply
plot(myTree)
gives you a visualization of the tree (based on the infrastructure in partykit)
Of course the tree is very large and you either need to zoom into the image or use a large screen to read it...
You can also us... | Visualizing C5.0 Decision Tree? [closed]
I might be missing something in your question but simply
plot(myTree)
gives you a visualization of the tree (based on the infrastructure in partykit)
Of course the tree is very large and you either |
46,668 | Gibbs sampler gets stuck in local mode | This is certainly possible. It often happens when variables are strongly correlated.
For simplicity, consider a two-parameter model. Because Gibbs sampling alters only one variable at a time, it can only move vertically or horizontally on the Cartesian plane. It will be unable to reach regions of high posterior probab... | Gibbs sampler gets stuck in local mode | This is certainly possible. It often happens when variables are strongly correlated.
For simplicity, consider a two-parameter model. Because Gibbs sampling alters only one variable at a time, it can | Gibbs sampler gets stuck in local mode
This is certainly possible. It often happens when variables are strongly correlated.
For simplicity, consider a two-parameter model. Because Gibbs sampling alters only one variable at a time, it can only move vertically or horizontally on the Cartesian plane. It will be unable to... | Gibbs sampler gets stuck in local mode
This is certainly possible. It often happens when variables are strongly correlated.
For simplicity, consider a two-parameter model. Because Gibbs sampling alters only one variable at a time, it can |
46,669 | Gibbs sampler gets stuck in local mode | Here's a simple case where Gibbs sampling gets stuck:
Imagine $(\theta_1,\theta_2)$ is a 50-50 mixture of two bivariate normals which each have independent components with variance 1.
The first mixture component is centered at $(10,-10)$. The second is centered at $(-10,10)$.
If you're in the top-left (green) mode, yo... | Gibbs sampler gets stuck in local mode | Here's a simple case where Gibbs sampling gets stuck:
Imagine $(\theta_1,\theta_2)$ is a 50-50 mixture of two bivariate normals which each have independent components with variance 1.
The first mixtur | Gibbs sampler gets stuck in local mode
Here's a simple case where Gibbs sampling gets stuck:
Imagine $(\theta_1,\theta_2)$ is a 50-50 mixture of two bivariate normals which each have independent components with variance 1.
The first mixture component is centered at $(10,-10)$. The second is centered at $(-10,10)$.
If ... | Gibbs sampler gets stuck in local mode
Here's a simple case where Gibbs sampling gets stuck:
Imagine $(\theta_1,\theta_2)$ is a 50-50 mixture of two bivariate normals which each have independent components with variance 1.
The first mixtur |
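A direct simulation of this two-mode example makes the point. Each full conditional is itself a two-component normal mixture whose weights depend on the other coordinate, so a chain started inside one mode essentially never draws from the other component (a sketch; the run length is arbitrary):

```python
import math
import random

MU = [(10.0, -10.0), (-10.0, 10.0)]   # centres of the two mixture components

def normal_kernel(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2)

def gibbs_step(theta, rng):
    th = list(theta)
    for i in (0, 1):
        other = th[1 - i]
        # component weights given the OTHER coordinate
        w0 = normal_kernel(other, MU[0][1 - i])
        w1 = normal_kernel(other, MU[1][1 - i])
        comp = 0 if rng.random() < w0 / (w0 + w1) else 1
        th[i] = rng.gauss(MU[comp][i], 1.0)
    return tuple(th)

rng = random.Random(42)
theta = (10.0, -10.0)                 # start inside one mode
chain = []
for _ in range(2000):
    theta = gibbs_step(theta, rng)
    chain.append(theta)
# the chain never reaches the other mode at (-10, 10)
```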
46,670 | P-Value for logistic regression model in R [closed] | Here's a way to do it:
x <- rnorm(100)
y <- factor(c(rep("ONE",50),rep("TWO",50)))
fmod <- glm(y~x,family = "binomial") ##"full" mod
nmod <- glm(y~1, family = 'binomial') ##"null" mod
anova(nmod, fmod, test = 'Chisq')
The output of this test gives the p-value comparing the full model to the null model.
Analysi... | P-Value for logistic regression model in R [closed] | Here's a way to do it:
x <- rnorm(100)
y <- factor(c(rep("ONE",50),rep("TWO",50)))
fmod <- glm(y~x,family = "binomial") ##"full" mod
nmod <- glm(y~1, family = 'binomial') ##"null" mod
anova(nmod, fmod | P-Value for logistic regression model in R [closed]
Here's a way to do it:
x <- rnorm(100)
y <- factor(c(rep("ONE",50),rep("TWO",50)))
fmod <- glm(y~x,family = "binomial") ##"full" mod
nmod <- glm(y~1, family = 'binomial') ##"null" mod
anova(nmod, fmod, test = 'Chisq')
The output of this test gives the p value ... | P-Value for logistic regression model in R [closed]
Here's a way to do it:
x <- rnorm(100)
y <- factor(c(rep("ONE",50),rep("TWO",50)))
fmod <- glm(y~x,family = "binomial") ##"full" mod
nmod <- glm(y~1, family = 'binomial') ##"null" mod
anova(nmod, fmod |
46,671 | How is the standard error of a slope calculated when the intercept term is omitted? | The formulas are the same as always, so let's focus on understanding what's going on.
Here is a small cloud of points. Its slope is uncertain. (Indeed, the coordinates of these points were drawn independently from a standard Normal distribution and then moved a little to the side, as shown in subsequent plots.)
He... | How is the standard error of a slope calculated when the intercept term is omitted? | The formulas are the same as always, so let's focus on understanding what's going on.
Here is a small cloud of points. Its slope is uncertain. (Indeed, the coordinates of these points were drawn ind | How is the standard error of a slope calculated when the intercept term is omitted?
The formulas are the same as always, so let's focus on understanding what's going on.
Here is a small cloud of points. Its slope is uncertain. (Indeed, the coordinates of these points were drawn independently from a standard Normal di... | How is the standard error of a slope calculated when the intercept term is omitted?
The formulas are the same as always, so let's focus on understanding what's going on.
Here is a small cloud of points. Its slope is uncertain. (Indeed, the coordinates of these points were drawn ind |
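Concretely, for regression through the origin the computation is the usual one; with a single estimated parameter the residual degrees of freedom are $n-1$, so $\hat\beta = \sum x_i y_i / \sum x_i^2$ and $\operatorname{se}(\hat\beta) = \sqrt{(\mathrm{RSS}/(n-1)) / \sum x_i^2}$. A minimal sketch with made-up data:

```python
import math

def slope_through_origin(x, y):
    # b = sum(x*y) / sum(x^2); se uses n - 1 residual degrees of freedom
    n = len(x)
    sxx = sum(xi * xi for xi in x)
    b = sum(xi * yi for xi, yi in zip(x, y)) / sxx
    rss = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(rss / (n - 1) / sxx)
    return b, se

b, se = slope_through_origin([1.0, 2.0, 3.0], [2.0, 4.0, 7.0])
```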
46,672 | Difference between standard beta and unstandard beta distributions? | Standard beta distribution is beta distribution bounded in $(0, 1)$ interval, so it is what we generally refer to when talking about beta distribution. Beta is not standard if it has other bounds, denoted sometimes as $a$ and $b$ (lower and upper bound), you can find some information here.
So the general form of probab... | Difference between standard beta and unstandard beta distributions? | Standard beta distribution is beta distribution bounded in $(0, 1)$ interval, so it is what we generally refer to when talking about beta distribution. Beta is not standard if it has other bounds, den | Difference between standard beta and unstandard beta distributions?
Standard beta distribution is beta distribution bounded in $(0, 1)$ interval, so it is what we generally refer to when talking about beta distribution. Beta is not standard if it has other bounds, denoted sometimes as $a$ and $b$ (lower and upper bound... | Difference between standard beta and unstandard beta distributions?
Standard beta distribution is beta distribution bounded in $(0, 1)$ interval, so it is what we generally refer to when talking about beta distribution. Beta is not standard if it has other bounds, den |
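A non-standard (four-parameter) beta on $[a, b]$ is just the standard one shifted and scaled: $X = a + (b-a)B$ with $B \sim \mathrm{Beta}(\alpha, \beta)$, so its mean is $a + (b-a)\,\alpha/(\alpha+\beta)$. A quick sketch (the parameter values are arbitrary):

```python
import random

def general_beta(alpha, beta, a, b, rng):
    # shift/scale a standard Beta(alpha, beta) draw from (0, 1) onto (a, b)
    return a + (b - a) * rng.betavariate(alpha, beta)

rng = random.Random(0)
a, b = -2.0, 3.0
draws = [general_beta(2.0, 5.0, a, b, rng) for _ in range(20_000)]
sample_mean = sum(draws) / len(draws)
theory_mean = a + (b - a) * 2.0 / (2.0 + 5.0)   # = -4/7
```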
46,673 | Interpreting interaction terms in Cox Proportional Hazard model | Interactions are tricky. The short answer is: the effect of v2 is bigger if v1 is 0.
For the calculation, the following holds. The interaction term suggests that having a zero for v1 and a high v2 score increases readmission. Having a 1 for v1 and a high v2 score also increases readmission, but the same score for v2 leads ... | Interactions are tricky. The short answer is: the effect of v2 is bigger if v1 is 0.
For the calculation, the following holds. The interaction term suggests that having a zero for v1 and a high v2 score i | Interpreting interaction terms in Cox Proportional Hazard model
Interactions are tricky. The short answer is: the effect of v2 is bigger if v1 is 0.
For the calculation, the following holds. The interaction term suggests that having a zero for v1 and a high v2 score increases readmission. Having a 1 for v1 and a high v2 sc... | Interpreting interaction terms in Cox Proportional Hazard model
Interactions are tricky. The short answer is: the effect of v2 is bigger if v1 is 0.
For the calculation, the following holds. The interaction term suggests that having a zero for v1 and a high v2 score i
46,674 | Gibbs sampling from a complex full conditional | Here is an excerpt from our Monte Carlo Statistical Methods book:
10.3.3. Metropolizing the Gibbs Sampler
Hybrid MCMC algorithms are often useful
at an elementary level of the simulation process; that is,
when some components of the Gibbs sampler conditionals cannot be easily simulated. Rather than looking for a custom... | Gibbs sampling from a complex full conditional | Here is an excerpt from our Monte Carlo Statistical Methods book:
10.3.3. Metropolizing the Gibbs Sampler
Hybrid MCMC algorithms are often useful
at an elementary level of the simulation process; that | Gibbs sampling from a complex full conditional
Here is an excerpt from our Monte Carlo Statistical Methods book:
10.3.3. Metropolizing the Gibbs Sampler
Hybrid MCMC algorithms are often useful
at an elementary level of the simulation process; that is,
when some components of the Gibbs sampler conditionals cannot be eas... | Gibbs sampling from a complex full conditional
Here is an excerpt from our Monte Carlo Statistical Methods book:
10.3.3. Metropolizing the Gibbs Sampler
Hybrid MCMC algorithms are often useful
at an elementary level of the simulation process; that |
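A minimal sketch of that hybrid idea, on an assumed toy target $p(x, y) \propto \exp(-(x^4 + x^2 y^2 + y^2)/2)$: the conditional of $y$ given $x$ is Gaussian and is drawn exactly, while the conditional of $x$ is non-standard, so it gets a single random-walk Metropolis step instead:

```python
import math
import random

def log_cond_x(x, y):
    # unnormalised log full conditional p(x | y) for the toy target
    return -(x ** 4 + x * x * y * y) / 2.0

def hybrid_step(x, y, rng, step=0.8):
    # exact Gibbs draw: y | x ~ N(0, 1 / (1 + x^2))
    y = rng.gauss(0.0, 1.0 / math.sqrt(1.0 + x * x))
    # one Metropolis step targeting p(x | y), symmetric random-walk proposal
    prop = x + rng.gauss(0.0, step)
    log_u = math.log(max(rng.random(), 1e-300))
    if log_u < log_cond_x(prop, y) - log_cond_x(x, y):
        x = prop
    return x, y

rng = random.Random(7)
x, y = 0.0, 0.0
n_accept, xs = 0, []
for _ in range(5000):
    x_new, y = hybrid_step(x, y, rng)
    n_accept += (x_new != x)
    x = x_new
    xs.append(x)
```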
46,675 | Best basis set for polynomial expansion | It really depends on your needs.
However, with regression and other "linear-model" problems (such as GLMs), the standard choice is orthogonal polynomials with respect to the observed set of $x$ values (usually just called "orthogonal polynomials" in regression-type contexts). Many packages provide them (e.g. poly in R ... | Best basis set for polynomial expansion | It really depends on your needs.
However, with regression and other "linear-model" problems (such as GLMs), the standard choice is orthogonal polynomials with respect to the observed set of $x$ values | Best basis set for polynomial expansion
It really depends on your needs.
However, with regression and other "linear-model" problems (such as GLMs), the standard choice is orthogonal polynomials with respect to the observed set of $x$ values (usually just called "orthogonal polynomials" in regression-type contexts). Man... | Best basis set for polynomial expansion
It really depends on your needs.
However, with regression and other "linear-model" problems (such as GLMs), the standard choice is orthogonal polynomials with respect to the observed set of $x$ values |
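A sketch of what "orthogonal with respect to the observed set of $x$ values" means, using numpy; this mimics, roughly, what R's poly computes by QR-decomposing a centred Vandermonde matrix and keeping the orthonormal columns:

```python
import numpy as np

def ortho_poly(x, degree):
    # orthonormal polynomial basis w.r.t. the observed x values
    x = np.asarray(x, dtype=float)
    V = np.vander(x - x.mean(), degree + 1, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q[:, 1:]          # drop the constant column, keep `degree` columns

x = np.linspace(0.0, 10.0, 25)
B = ortho_poly(x, 3)         # 25 x 3: linear, quadratic, cubic terms
gram = B.T @ B               # numerically the 3 x 3 identity
```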
46,676 | Best basis set for polynomial expansion | Orthogonal polynomials, by construction, come with a weight function $w(x)$, so that orthogonality makes sense only when referring to $w(x)$. Choosing which orthogonal polynomials to use highly depends on your domain. For example, Legendre polynomials are defined on $[-1,1]$ whereas Laguerre polynomials are defined on $[... | Best basis set for polynomial expansion | Orthogonal polynomials, by construction, come with a weight function $w(x)$, so that orthogonality makes sense only when referring to $w(x)$. Choosing which orthogonal polynomials to use highly depends | Best basis set for polynomial expansion
Orthogonal polynomials, by construction, come with a weight function $w(x)$, so that orthogonality makes sense only when referring to $w(x)$. Choosing which orthogonal polynomials to use highly depends on your domain. For example, Legendre polynomials are defined on $[-1,1]$ wherea... | Best basis set for polynomial expansion
Orthogonal polynomials, by construction, come with a weight function $w(x)$, so that orthogonality makes sense only when referring to $w(x)$. Choosing which orthogonal polynomials to use highly depends
46,677 | Best basis set for polynomial expansion | First, you have to define what is "best". For instance, if you say that the best function is one that minimizes the squared errors while still being smooth, then you might end up with a cubic spline basis. It all depends on your function and your understanding of what is "best". | Best basis set for polynomial expansion | First, you have to define what is "best". For instance, if you say that the best function is one that minimizes the squared errors while still being smooth, then you might end up with cubic | Best basis set for polynomial expansion
First, you have to define what is "best". For instance, if you say that the best function is such that minimizes the least squared errors and while still being smooth, then you might end up with cubic spline basis. It all depends on your function and your understanding of what is... | Best basis set for polynomial expansion
First, you have to define what is "best". For instance, if you say that the best function is one that minimizes the squared errors while still being smooth, then you might end up with cubic
46,678 | Difference between pointwise mutual information and log likelihood ratio | They're very closely related. "Log-likelihood" is not a very useful name, because it sounds like it's going to be a log probability. But it's not that at all. "Log likelihood ratio" as used at times in the original paper is more correct, and that immediately hints that we should expect to see this metric as a compariso... | Difference between pointwise mutual information and log likelihood ratio | They're very closely related. "Log-likelihood" is not a very useful name, because it sounds like it's going to be a log probability. But it's not that at all. "Log likelihood ratio" as used at times i | Difference between pointwise mutual information and log likelihood ratio
They're very closely related. "Log-likelihood" is not a very useful name, because it sounds like it's going to be a log probability. But it's not that at all. "Log likelihood ratio" as used at times in the original paper is more correct, and that ... | Difference between pointwise mutual information and log likelihood ratio
They're very closely related. "Log-likelihood" is not a very useful name, because it sounds like it's going to be a log probability. But it's not that at all. "Log likelihood ratio" as used at times i |
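Concretely, PMI for a single pair of outcomes is computed from counts as follows (toy counts, base-2 logs):

```python
import math

def pmi(n_xy, n_x, n_y, n_total):
    # pmi(x, y) = log2( p(x, y) / (p(x) * p(y)) )
    p_xy = n_xy / n_total
    p_x = n_x / n_total
    p_y = n_y / n_total
    return math.log2(p_xy / (p_x * p_y))

# toy corpus: the pair occurs 10 times, each word 20 times, 100 tokens total
value = pmi(10, 20, 20, 100)   # log2(0.1 / 0.04) = log2(2.5)
```

A positive value means the pair co-occurs more often than independence would predict.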
46,679 | What is posterior predictive check, and how I can do that in R? | The hierarchical model you describe is a generative model. The model you constructed can be used to generate "fake" data. This is a little different conceptually than using your model to make predictions.
The assumption underlying this concept is that a good model should generate fake data that is similar to the actu... | What is posterior predictive check, and how I can do that in R? | The hierarchical model you describe is a generative model. The model you constructed can be used to generate "fake" data. This is a little different conceptually than using your model to make predic | What is posterior predictive check, and how I can do that in R?
The hierarchical model you describe is a generative model. The model you constructed can be used to generate "fake" data. This is a little different conceptually than using your model to make predictions.
The assumption underlying this concept is that a ... | What is posterior predictive check, and how I can do that in R?
The hierarchical model you describe is a generative model. The model you constructed can be used to generate "fake" data. This is a little different conceptually than using your model to make predic |
46,680 | How could I get a correlation value that accounts for gender?
This is called partial correlation; as Wikipedia notes, it "measures the degree of association between two random variables, with the effect of a set of controlling random variables removed".
Having correlation coefficients of three variables $X$, $Y$ and $Z$, we can correct the correlation $\rho_{XY}$ by con...
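Assuming the truncated derivation is headed toward the standard first-order formula $\rho_{XY\cdot Z} = (\rho_{XY} - \rho_{XZ}\rho_{YZ}) / \sqrt{(1-\rho_{XZ}^2)(1-\rho_{YZ}^2)}$, a quick sketch with simulated data:

```python
import numpy as np

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of X and Y, controlling for Z."""
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Simulated example: X and Y are correlated only through a common cause Z.
rng = np.random.default_rng(1)
z = rng.normal(size=5000)
x = 0.7 * z + rng.normal(size=5000)
y = 0.7 * z + rng.normal(size=5000)

r_xy = np.corrcoef(x, y)[0, 1]
r_xz = np.corrcoef(x, z)[0, 1]
r_yz = np.corrcoef(y, z)[0, 1]
r_partial = partial_corr(r_xy, r_xz, r_yz)
print(round(r_partial, 2))  # near 0: the X-Y association was due to Z
```

For the gender case, $Z$ would be a 0/1 indicator, which the same formula handles.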
46,681 | What are the consequences of removing the tails of a distribution?
Removing observations below the $k$th percentile and above the $(100-k)$th before calculating some estimator is the same as trimming (i.e. calculating a trimmed estimator).
The effect of trimming on the distribution is effectively that of truncation at both ends. The impact of sample-based trimming on the observed dist...
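A minimal sketch of symmetric trimming on hypothetical data (the contaminated sample and the 5% cutoff are illustrative choices):

```python
import numpy as np

def trimmed_mean(x, k):
    """Mean after dropping observations below the k-th and above the
    (100-k)-th percentile (symmetric trimming)."""
    lo, hi = np.percentile(x, [k, 100 - k])
    kept = x[(x >= lo) & (x <= hi)]
    return kept.mean()

# Hypothetical sample: mostly N(0, 1), plus a few wild outliers.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 990), rng.normal(0, 50, 10)])
print(round(trimmed_mean(x, 5), 2))  # much less affected by the tails than x.mean()
```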
46,682 | Graphical lasso numerical problem (not SPD matrix result)
I ran into the same issue with some data I was using in my research; while I don't quite understand what leads to this mathematically/computationally, hopefully my answer and the code below help:
Two comments on your problem:
The raw CSV file includes data fields which have not been de-meaned or scaled. Normalizing t...
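The de-meaning/scaling point can be illustrated without any glasso solver at all: standardizing each column puts the empirical covariance on the correlation scale, which is much better conditioned input for the solver. A sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical data whose columns live on wildly different scales.
X = rng.normal(size=(200, 5)) * [1, 10, 100, 0.1, 1000]

# De-mean and scale each column before estimating the sparse precision
# matrix; on the correlation scale the solver is far less likely to lose
# positive definiteness for numerical reasons.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
S = np.cov(Xs, rowvar=False)  # now a (near-)correlation matrix
print(np.all(np.linalg.eigvalsh(S) > 0))  # SPD check on the solver's input
```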
46,683 | Graphical lasso numerical problem (not SPD matrix result)
I have also run into this SPD problem. I was unable to avoid it by rescaling my data because I was interested in conducting simulations in a particular (strange) statistical regime.
I then found the recent GGLasso python package, using more recent ADMM algorithms to solve the GLasso problem. So far this has worked well...
46,684 | Clustering a dense dataset
As @Anony-Mousse implies, it isn't clear right now that your data actually are clusterable. In the end, you may choose to simply chop your data into partitions, if that will serve your business purposes, but there may not be any real latent groupings.
From where I sit, I cannot provide any guaranteed solutions, but ...
46,685 | A method for propagating labels to unlabelled data
Learning from positive and unlabeled data is often referred to as PU learning. What you describe is a common approach to these kinds of problems, though I personally dislike such iterative approaches because they are highly sensitive to false positives (if you have any).
You might want to check out two of my papers and...
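For concreteness, a toy sketch of the iterative scheme being criticized (hypothetical 1-D data and a crude centroid scorer; this is an illustration of the loop, not a recommended implementation):

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy 1-D data: labelled positives cluster near +2; the unlabelled pool mixes
# unlabelled positives (near +2) with negatives (near -2).
labelled = list(rng.normal(2, 0.5, 30))
pool = list(np.concatenate([rng.normal(2, 0.5, 70), rng.normal(-2, 0.5, 100)]))

# Iteratively promote the unlabelled points closest to the current positive
# centroid -- the step that makes the scheme fragile: a loose confidence
# threshold lets false positives in, and they then drag the centroid with them.
for _ in range(10):
    centre = float(np.mean(labelled))
    confident = [p for p in pool if abs(p - centre) < 0.75]
    if not confident:
        break
    labelled.extend(confident)
    pool = [p for p in pool if abs(p - centre) >= 0.75]
print(len(labelled), round(float(np.mean(labelled)), 1))
```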
46,686 | A method for propagating labels to unlabelled data
What you describe is a very sound idea. It is called Semi-Supervised Expectation-Maximization and is often used in text classification. Here is some literature:
http://research.microsoft.com/en-us/um/people/xiaohe/nips08/paperaccepted/nips2008wsl1_02.pdf
http://ciitresearch.org/dl/index.php/aiml/article/view/AIML052012...
46,687 | What happens when I use gradient descent over a zero slope?
It won't -- gradient descent only finds a local minimum*, and that "plateau" is one.
However, there are several ways to modify gradient descent to avoid problems like this one. One option is to re-run the descent algorithm multiple times, using different starting locations for each run. Runs started between B and C will...
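A minimal sketch of the random-restart idea on a hypothetical double-well objective (the function, learning rate, and restart count are all illustrative choices):

```python
import numpy as np

def f(x):  # double-well objective: two minima, global one on the left
    return x**4 - 4 * x**2 + 0.1 * x

def grad(x):
    return 4 * x**3 - 8 * x + 0.1

def descend(x0, lr=0.01, steps=2000):
    """Plain gradient descent from a single starting point."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Restart from several random starting points and keep the best endpoint;
# runs starting in the wrong basin get stuck at the shallower minimum.
rng = np.random.default_rng(5)
ends = [descend(x0) for x0 in rng.uniform(-3, 3, 20)]
best = min(ends, key=f)
print(round(best, 2))  # near the global minimum around x = -1.42
```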
46,688 | What happens when I use gradient descent over a zero slope?
Simple answer: it won't.
Gradient descent climbs down a hill. If it reaches a plateau, it considers the algorithm converged and moves no more.
If you think that this is a fault of gradient descent, one should know that multi-modal problems are very difficult and outside of a fine grid search (which can easily be proh...
46,689 | What happens when I use gradient descent over a zero slope?
There's only one thing you need to know about gradient descent. It is complete and utter garbage, and an absolutely horrible algorithm which should not even be considered unless there are at least hundreds of millions of variables, in which case don't expect it to work well, except when solving the same problem over an...
46,690 | Bayesian regression full conditional distribution
As your reproduction of the book shows, the solution is incorrect for the simple reason that the quantity $(X^{T}X)^{-1}$ is a $p\times p$ matrix, not a scalar. Hence you cannot divide by $(X^{T}X)^{-1}$. (This is a terrible way of explaining this standard derivation!)
What you can write instead i...
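For reference, the usual way to finish this derivation without "dividing by a matrix" is to complete the square in $\beta$; sketched here under a flat prior on $\beta$ (the book's prior may differ, which would only shift the mean and covariance):

```latex
% Completing the square in \beta (flat prior, known \sigma^2):
(y - X\beta)^{T}(y - X\beta)
  = (\beta - \hat\beta)^{T} X^{T}X\, (\beta - \hat\beta) + \text{const},
\qquad \hat\beta = (X^{T}X)^{-1} X^{T} y,
% which identifies the full conditional as multivariate normal:
\beta \mid \sigma^{2}, y \sim
  \mathcal{N}\!\left( (X^{T}X)^{-1} X^{T} y,\; \sigma^{2}\,(X^{T}X)^{-1} \right).
```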
46,691 | In a neural network, do biases essentially need updates when being trained?
If you try to leave the biases fixed at any value, then each neuron will try to use its "overall input activation" as a kind of bias (by having a small weight to all of its inputs). This makes your learning less stable than just having a bias. If you try to fight this desire with regularization, it may not be possibl...
46,692 | In a neural network, do biases essentially need updates when being trained?
The idea is to learn the bias weights but have the activation fixed at 1. Anything else would make it an additional ordinary unit.
46,693 | Significance of regression coefficients and their equality
Yes. This answer interprets the question in the following way:
$\beta_1$ is significantly different from zero in the full model
$$y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \varepsilon$$
$\beta_2$ is not significantly different from zero in the full model.
Either (a) $\beta_1=\beta_2$ or (b) a test of $H_0:\beta_1=\bet...
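A test of $H_0:\beta_1=\beta_2$ needs the covariance between the two estimates, not just their separate standard errors. A sketch with simulated data (hypothetical data-generating process in which the null is true):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)  # true beta1 == beta2

# OLS fit and the estimated covariance matrix of the coefficients.
X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
sigma2 = resid @ resid / (n - 3)
V = sigma2 * np.linalg.inv(X.T @ X)

# Wald test of H0: beta1 = beta2. The -2*Cov term is exactly what comparing
# the two per-coefficient t-tests ignores.
se_diff = np.sqrt(V[1, 1] + V[2, 2] - 2 * V[1, 2])
t_stat = (beta[1] - beta[2]) / se_diff
print(round(t_stat, 2))  # small |t|: no evidence the coefficients differ
```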
46,694 | Use Edge detection in Image classification
Your approach goes along the lines of the popular histogram of gradients approach. See here and the corresponding Wikipedia entry. Now unless you have some already labelled data, training such a system is quite laborious. If possible, I would start by using some available implementation to experiment with, like the one off...
46,695 | Use Edge detection in Image classification
If you are going to use edge detection, you will have to use a distance transform to do the kind of classification you are thinking of. Once that is done you need to create a distance matrix between the test image(s) (ones without the label) and the training image(s) (ones with the label).
But may I suggest using HoG tra...
46,696 | Use Edge detection in Image classification
Remember that when doing computer vision and image processing you should ensure that all images are taken in the same conditions. Preliminary preparation of data (exposure, resizing, lighting, filtering etc.) dramatically reduces problems that might occur later.
Yes, choosing image pixels as input features seems to...
46,697 | Visualizing relationship between independent variable and binary response
I can't speak to the modeling (except to guess that the bend near 100 is too sharp to be captured by a logistic curve), but a visualization idea is to continue your binning idea to the extreme. Consider a bin for every possible interactions value which extends some fixed amount on each side. Compute the mean and CI for...
46,698 | Gaussian process regression: leave-one-out prediction
In the general noisy or ``signal $+$ noise'' framework $y_i = f(\mathbf{x}_i) + \epsilon_i$, several observations can be made at the same location $\mathbf{x}_i$, so the notations $Y(\mathbf{x}_i)$ and $f_{-i}$ can then be misleading.
Suppose first that the $n$ locations $\mathbf{x}_i$ are distinct, so that deleting ...
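Whatever the truncated derivation goes on to establish, the standard closed-form ("virtual") LOO identities for GP regression are worth having in hand: with $K$ the kernel matrix including the noise term, $\mu_{-i} = y_i - [K^{-1}y]_i / [K^{-1}]_{ii}$ and $\sigma_{-i}^2 = 1/[K^{-1}]_{ii}$, so no model needs to be refit $n$ times. A sketch with a hypothetical squared-exponential kernel, checked against brute-force deletion:

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 5, 30))
y = np.sin(x) + 0.1 * rng.normal(size=30)

# Squared-exponential kernel plus a noise term on the diagonal.
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2) + 0.01 * np.eye(30)

# Closed-form leave-one-out predictive means and variances.
Kinv = np.linalg.inv(K)
alpha = Kinv @ y
mu_loo = y - alpha / np.diag(Kinv)
var_loo = 1.0 / np.diag(Kinv)

# Brute-force check for one point: drop i, predict it from the rest.
i = 10
mask = np.arange(30) != i
mu_i = K[i, mask] @ np.linalg.solve(K[np.ix_(mask, mask)], y[mask])
print(np.isclose(mu_loo[i], mu_i))  # the identity is exact
```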
46,699 | Likelihood-based hypothesis testing
As noted in the comments, the Wald statistic is simple, powerful and therefore a good choice for this problem. Now, for two Poisson populations, presumably independent, we wish to test the hypotheses that their parameters are equal, namely:
$$H_0: \lambda_1=\lambda_2\quad \text{vs} \quad H_1 :\lambda_1 \neq \lambda_2$$...
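Plugging the Poisson MLEs $\hat\lambda_j = \bar{x}_j$ and $\widehat{\mathrm{Var}}(\hat\lambda_j) = \hat\lambda_j/n_j$ into the Wald statistic gives a one-liner (the counts below are hypothetical):

```python
import math

def poisson_wald(x1_sum, n1, x2_sum, n2):
    """Wald statistic for H0: lambda1 == lambda2 with two independent
    Poisson samples; lambda-hat is the sample mean, Var(lambda-hat) = lambda/n."""
    l1, l2 = x1_sum / n1, x2_sum / n2
    se = math.sqrt(l1 / n1 + l2 / n2)
    return (l1 - l2) / se

# Hypothetical data: 120 events over 50 intervals vs 90 events over 50.
W = poisson_wald(120, 50, 90, 50)
print(round(W, 2))  # compare |W| to 1.96 for a two-sided 5% test
```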
46,700 | t-statistic before-after
You did the problem correctly; the site did not. It committed a well-known error of not retaining intermediate results to sufficient precision, causing its final answer to be erroneous.
Forensic Analysis
This site takes the student through a guided sequence of questions to go through the steps of conducting a t-test....
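The rounding failure is easy to reproduce: carry the intermediate mean and standard deviation at only two decimals, as guided step-by-step calculators often do, and the final t statistic shifts. The differences below are made up, not the site's data:

```python
import math

# Hypothetical before/after differences for a paired t-test.
d = [1.2, 0.4, -0.3, 0.8, 1.1, 0.2, 0.9, 0.1]
n = len(d)
mean = sum(d) / n
sd = math.sqrt(sum((x - mean)**2 for x in d) / (n - 1))
t_full = mean / (sd / math.sqrt(n))

# Same computation, but rounding the intermediates to 2 decimals first --
# the kind of shortcut that produces a wrong final answer.
mean_r, sd_r = round(mean, 2), round(sd, 2)
t_rounded = mean_r / (sd_r / math.sqrt(n))
print(round(t_full, 3), round(t_rounded, 3))  # the two disagree
```

Near a critical value, a shift of this size is enough to flip the accept/reject decision.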